diff --git a/CHANGELOG.md b/CHANGELOG.md index 4025a66c3a..fe5e200d17 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,7 +7,12 @@ and this project adheres to the versioning scheme outlined in the [README.md](RE ## [Unreleased] -- Added support for Clarity 3 +## [3.0.0.0.0] + +### Added + +- **Nakamoto consensus rules, activating in epoch 3.0 at block 867,867** (see [SIP-021](https://github.com/stacksgov/sips/blob/main/sips/sip-021/sip-021-nakamoto.md) for details) +- Clarity 3, activating with epoch 3.0 - Keywords / variable - `tenure-height` added - `stacks-block-height` added @@ -16,10 +21,29 @@ and this project adheres to the versioning scheme outlined in the [README.md](RE - `get-stacks-block-info?` added - `get-tenure-info?` added - `get-block-info?` removed -- Added `/v3/signer/{signer_pubkey}/{reward_cycle}` endpoint -- Added `tenure_height` to `/v2/info` endpoint -- Added optional `timeout_ms` to `events_observer` configuration -- Added support for re-sending events to event observers across restarts +- New RPC endpoints + - `/v3/blocks/:block_id` + - `/v3/blocks/upload/` + - `/v3/signer/:signer_pubkey/:cycle_num` + - `/v3/sortitions` + - `/v3/stacker_set/:cycle_num` + - `/v3/tenures/:block_id` + - `/v3/tenures/fork_info/:start/:stop` + - `/v3/tenures/info` + - `/v3/tenures/tip/:consensus_hash` +- Re-send events to event observers across restarts +- Support custom chain-ids for testing +- Add `replay-block` command to CLI + +### Changed + +- Strict config file validation (unknown fields will cause the node to fail to start) +- Add optional `timeout_ms` to `events_observer` configuration +- Modified RPC endpoints + - Include `tenure_height` in `/v2/info` endpoint + - Include `block_time` and `tenure_height` in `/new/block` event payload +- Various improvements to logging, reducing log spam and improving log messages +- Various improvements and bugfixes ## [2.5.0.0.7] diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 22507d6f33..8d6c3aabba 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -11,7 +11,7 @@ could not only have catastrophic consequences for users (i.e. they lose all their money), but also be intractable to fix, mitigate, or remove. This is because unlike nearly every other kind of networked software, **the state of the blockchain is what the users' computers -say it is.** If you want to make changes, you _must_ get _user_ +say it is.** If you want to make changes, you _must_ get _user_ buy-in, and this is necessarily time-consuming and not at all guaranteed to succeed. @@ -25,28 +25,7 @@ This project and everyone participating in it is governed by this [Code of Condu ## Development Workflow -- For typical development, branch off of the `develop` branch. -- For consensus breaking changes, branch off of the `next` branch. -- For hotfixes, branch off of `master`. - -If you have commit access, use a branch in this repository. If you do -not, then you must use a github fork of the repository. - -### Branch naming - -Branch names should use a prefix that conveys the overall goal of the branch: - -- `feat/some-fancy-new-thing` for new features -- `fix/some-broken-thing` for hot fixes and bug fixes -- `docs/something-needs-a-comment` for documentation -- `ci/build-changes` for continuous-integration changes -- `test/more-coverage` for branches that only add more tests -- `refactor/formatting-fix` for refactors - -The branch suffix must only include ASCII lowercase and uppercase letters, -digits, underscores, periods and dashes. 
- -The full branch name must be max 128 characters long. +See the branching document in [branching.md](./docs/branching.md). ### Merging PRs from Forks @@ -67,26 +46,28 @@ For an example of this process, see PRs [#3598](https://github.com/stacks-network/stacks-core/pull/3598) and [#3626](https://github.com/stacks-network/stacks-core/pull/3626). - ### Documentation Updates - Any major changes should be added to the [CHANGELOG](CHANGELOG.md). - Mention any required documentation changes in the description of your pull request. -- If adding an RPC endpoint, add an entry for the new endpoint to the - OpenAPI spec `./docs/rpc/openapi.yaml`. +- If adding or updating an RPC endpoint, ensure the change is documented in the + OpenAPI spec: [`./docs/rpc/openapi.yaml`](./docs/rpc/openapi.yaml). - If your code adds or modifies any major features (struct, trait, test, module, function, etc.), each should be documented according to our [coding guidelines](#Coding-Guidelines). ## Git Commit Messages + Aim to use descriptive git commit messages. We try to follow [conventional commits](https://www.conventionalcommits.org/en/v1.0.0/). The general format is as follows: + ``` [optional scope]: [optional body] [optional footer(s)] ``` + Common types include build, ci, docs, fix, feat, test, refactor, etc. When a commit is addressing or related to a particular Github issue, it @@ -97,6 +78,7 @@ fix: incorporate unlocks in mempool admitter, #3623 ``` ## Recommended developer setup + ### Recommended githooks It is helpful to set up the pre-commit git hook set up, so that Rust formatting issues are caught before @@ -104,6 +86,7 @@ you push your code. Follow these instruction to set it up: 1. Rename `.git/hooks/pre-commit.sample` to `.git/hooks/pre-commit` 2. Change the content of `.git/hooks/pre-commit` to be the following + ```sh #!/bin/sh git diff --name-only --staged | grep '\.rs$' | xargs -P 8 -I {} rustfmt {} --edition 2021 --check --config group_imports=StdExternalCrate,imports_granularity=Module || ( @@ -111,52 +94,53 @@ git diff --name-only --staged | grep '\.rs$' | xargs -P 8 -I {} rustfmt {} --edi exit 1 ) ``` + 3. Make it executable by running `chmod +x .git/hooks/pre-commit` That's it! Now your pre-commit hook should be configured on your local machine. # Creating and Reviewing PRs -This section describes some best practices on how to create and review PRs in this context. The target audience is people who have commit access to this repository (reviewers), and people who open PRs (submitters). This is a living document -- developers can and should document their own additional guidelines here. +This section describes some best practices on how to create and review PRs in this context. The target audience is people who have commit access to this repository (reviewers), and people who open PRs (submitters). This is a living document -- developers can and should document their own additional guidelines here. ## Overview -Blockchain software development requires a much higher degree of rigor than most other kinds of software. This is because with blockchains, **there is no roll-back** from a bad deployment. +Blockchain software development requires a much higher degree of rigor than most other kinds of software. This is because with blockchains, **there is no roll-back** from a bad deployment. -Therefore, making changes to the codebase is necessarily a review-intensive process. No one wants bugs, but **no one can afford consensus bugs**. This page describes how to make and review _non-consensus_ changes. 
The process for consensus changes includes not only the entirety of this document, but also the [SIP process](https://github.com/stacksgov/sips/blob/main/sips/sip-000/sip-000-stacks-improvement-proposal-process.md). +Therefore, making changes to the codebase is necessarily a review-intensive process. No one wants bugs, but **no one can afford consensus bugs**. This page describes how to make and review _non-consensus_ changes. The process for consensus changes includes not only the entirety of this document, but also the [SIP process](https://github.com/stacksgov/sips/blob/main/sips/sip-000/sip-000-stacks-improvement-proposal-process.md). -A good PR review sets both the submitter and reviewers up for success. It minimizes the time required by both parties to get the code into an acceptable state, without sacrificing quality or safety. Unlike most other software development practices, _safety_ is the primary concern. A PR can and will be delayed or closed if there is any concern that it will lead to unintended consensus-breaking changes. +A good PR review sets both the submitter and reviewers up for success. It minimizes the time required by both parties to get the code into an acceptable state, without sacrificing quality or safety. Unlike most other software development practices, _safety_ is the primary concern. A PR can and will be delayed or closed if there is any concern that it will lead to unintended consensus-breaking changes. -This document is formatted like a checklist. Each paragraph is one goal or action item that the reviewer and/or submitter must complete. The **key take-away** from each paragraph is bolded. +This document is formatted like a checklist. Each paragraph is one goal or action item that the reviewer and/or submitter must complete. The **key take-away** from each paragraph is bolded. ## Reviewer Expectations -The overall task of a reviewer is to create an **acceptance plan** for the submitter. This is simply the list of things that the submitter _must_ do in order for the PR to be merged. The acceptance plan should be coherent, cohesive, succinct, and complete enough that the reviewer will understand exactly what they need to do to make the PR worthy of merging, without further reviews. The _lack of ambiguity_ is the most important trait of an acceptance plan. +The overall task of a reviewer is to create an **acceptance plan** for the submitter. This is simply the list of things that the submitter _must_ do in order for the PR to be merged. The acceptance plan should be coherent, cohesive, succinct, and complete enough that the reviewer will understand exactly what they need to do to make the PR worthy of merging, without further reviews. The _lack of ambiguity_ is the most important trait of an acceptance plan. -Reviewers should **complete the review in one round**. The reviewer should provide enough detail to the submitter that the submitter can make all of the requested changes without further supervision. Whenever possible, the reviewer should provide all of these details publicly as comments, so that _other_ reviewers can vet them as well. If a reviewer _cannot_ complete the review in one round due to its size and complexity, then the reviewer may request that the PR be simplified or broken into multiple PRs. +Reviewers should **complete the review in one round**. The reviewer should provide enough detail to the submitter that the submitter can make all of the requested changes without further supervision. 
Whenever possible, the reviewer should provide all of these details publicly as comments, so that _other_ reviewers can vet them as well. If a reviewer _cannot_ complete the review in one round due to its size and complexity, then the reviewer may request that the PR be simplified or broken into multiple PRs. Reviewers should make use of Github's "pending comments" feature. This ensures that the review is "atomic": when the reviewer submits the review, all the comments are published at once. -Reviewers should aim to **perform a review in one sitting** whenever possible. This enables a reviewer to time-box their review, and ensures that by the time they finish studying the patch, they have a complete understanding of what the PR does in their head. This, in turn, sets them up for success when writing up the acceptance plan. It also enables reviewers to mark time for it on their calendars, which helps everyone else develop reasonable expectations as to when things will be done. +Reviewers should aim to **perform a review in one sitting** whenever possible. This enables a reviewer to time-box their review, and ensures that by the time they finish studying the patch, they have a complete understanding of what the PR does in their head. This, in turn, sets them up for success when writing up the acceptance plan. It also enables reviewers to mark time for it on their calendars, which helps everyone else develop reasonable expectations as to when things will be done. -Code reviews should be timely. Reviewers should start no more than +Code reviews should be timely. Reviewers should start no more than **2 business days** after reviewers are assigned. This applies to each reviewer: i.e., we expect all reviewers to respond within two days. The `develop` and `next` branches in particular often change quickly, so letting a PR languish only creates more merge work for the -submitter. If a review cannot be started within this timeframe, then +submitter. If a review cannot be started within this timeframe, then the reviewers should **tell the submitter when they can begin**. This gives the reviewer the opportunity to keep working on the PR (if needed) or even withdraw and resubmit it. -Reviewers must, above all else, **ensure that submitters follow the PR checklist** below. +Reviewers must, above all else, **ensure that submitters follow the PR checklist** below. **As a reviewer, if you do not understand the PR's code or the potential consequences of the code, it is the submitter's responsibility to simplify the code, provide better documentation, or withdraw the PR.** ## Submitter Expectations -Everyone is busy all the time with a host of different tasks. Consequently, a PR's size and scope should be constrained so that **a review can be written for it no more than 2 hours.** This time block starts when the reviewer opens the patch, and ends when the reviewer hits the "submit review" button. If it takes more than 2 hours, then the PR should be broken into multiple PRs unless the reviewers agree to spend more time on it. A PR can be rejected if the reviewers believe they will need longer than this. +Everyone is busy all the time with a host of different tasks. Consequently, a PR's size and scope should be constrained so that **a review can be written for it no more than 2 hours.** This time block starts when the reviewer opens the patch, and ends when the reviewer hits the "submit review" button. 
If it takes more than 2 hours, then the PR should be broken into multiple PRs unless the reviewers agree to spend more time on it. A PR can be rejected if the reviewers believe they will need longer than this. -The size and scale of a PR depend on the reviewers' abilities to process the change. Different reviewers and submitters have different levels of familiarity with the codebase. Moreover, everyone has a different schedule -- sometimes, some people are more busy than others. +The size and scale of a PR depend on the reviewers' abilities to process the change. Different reviewers and submitters have different levels of familiarity with the codebase. Moreover, everyone has a different schedule -- sometimes, some people are more busy than others. A successful PR submitter **takes the reviewers' familiarity and availability into account** when crafting the PR, even going so far as to ask in advance if a particular person could be available for review. @@ -172,13 +156,13 @@ Weekly Blockchain Engineering Meeting (information can be found in Discord). A PR submission's text should **answer the following questions** for reviewers: -* What problem is being solved by this PR? -* What does the solution do to address them? -* Why is this the best solution? What alternatives were considered, and why are they worse? -* What do reviewers need to be familiar with in order to provide useful feedback? -* What issue(s) are addressed by this PR? -* What are some hints to understanding some of the more intricate or clever parts of the PR? -* Does this PR change any database schemas? Does a node need to re-sync from genesis when this PR is applied? +- What problem is being solved by this PR? +- What does the solution do to address them? +- Why is this the best solution? What alternatives were considered, and why are they worse? +- What do reviewers need to be familiar with in order to provide useful feedback? +- What issue(s) are addressed by this PR? +- What are some hints to understanding some of the more intricate or clever parts of the PR? +- Does this PR change any database schemas? Does a node need to re-sync from genesis when this PR is applied? In addition, the PR submission should **answer the prompts of the Github template** we use for PRs. @@ -195,7 +179,7 @@ the immediate problem they are meant to solve will be rejected. #### Type simplicity -Simplicity of implementation includes simplicity of types. Type parameters +Simplicity of implementation includes simplicity of types. Type parameters and associated types should only be used if there are at least two possible implementations of those types. @@ -204,17 +188,17 @@ on its own. ### Builds with a stable Rust compiler -We use a recent, stable Rust compiler. Contributions should _not_ +We use a recent, stable Rust compiler. Contributions should _not_ require nightly Rust features to build and run. ### Minimal dependencies -Adding new package dependencies is very much discouraged. Exceptions will be +Adding new package dependencies is very much discouraged. Exceptions will be granted on a case-by-case basis, and only if deemed absolutely necessary. ### Minimal global macros -Adding new global macros is discouraged. Exceptions will only be given if +Adding new global macros is discouraged. Exceptions will only be given if absolutely necessary. ### No compiler warnings @@ -230,162 +214,160 @@ Contributions should not contain `unsafe` blocks if at all possible. ## Documentation -* Each file must have a **copyright statement**. 
-* Any new non-test modules should have **module-level documentation** explaining what the module does, and how it fits into the blockchain as a whole ([example](https://github.com/stacks-network/stacks-core/blob/4852d6439b473e24705f14b8af637aded33cb422/testnet/stacks-node/src/neon_node.rs#L17)). -* Any new files must have some **top-of-file documentation** that describes what the contained code does, and how it fits into the overall module. +- Each file must have a **copyright statement**. +- Any new non-test modules should have **module-level documentation** explaining what the module does, and how it fits into the blockchain as a whole ([example](https://github.com/stacks-network/stacks-core/blob/4852d6439b473e24705f14b8af637aded33cb422/testnet/stacks-node/src/neon_node.rs#L17)). +- Any new files must have some **top-of-file documentation** that describes what the contained code does, and how it fits into the overall module. Within the source files, the following **code documentation** standards are expected: -* Each public function, struct, enum, and trait should have a Rustdoc comment block describing the API contract it offers. This goes for private structs and traits as well. -* Each _non-trivial_ private function should likewise have a Rustdoc comment block. Trivial ones that are self-explanatory, like getters and setters, do not need documentation. If you are unsure if your function needs a docstring, err on the side of documenting it. -* Each struct and enum member must have a Rustdoc comment string indicating what it does, and how it is used. This can be as little as a one-liner, as long as the relevant information is communicated. +- Each public function, struct, enum, and trait should have a Rustdoc comment block describing the API contract it offers. This goes for private structs and traits as well. +- Each _non-trivial_ private function should likewise have a Rustdoc comment block. Trivial ones that are self-explanatory, like getters and setters, do not need documentation. If you are unsure if your function needs a docstring, err on the side of documenting it. +- Each struct and enum member must have a Rustdoc comment string indicating what it does, and how it is used. This can be as little as a one-liner, as long as the relevant information is communicated. ## Factoring -* **Each non-`mod.rs` file implements at most one subsystem**. It may include multiple struct implementations and trait implementations. The filename should succinctly identify the subsystem, and the file-level documentation must succinctly describe it and how it relates to other subsystems it interacts with. +- **Each non-`mod.rs` file implements at most one subsystem**. It may include multiple struct implementations and trait implementations. The filename should succinctly identify the subsystem, and the file-level documentation must succinctly describe it and how it relates to other subsystems it interacts with. -* Directories represent collections of related but distinct subsystems. +- Directories represent collections of related but distinct subsystems. -* To the greatest extent possible, **business logic and I/O should be - separated**. A common pattern used in the codebase is to place the +- To the greatest extent possible, **business logic and I/O should be + separated**. A common pattern used in the codebase is to place the business logic into an "inner" function that does not do I/O, and - handle I/O reads and writes in an "outer" function. 
The "outer" + handle I/O reads and writes in an "outer" function. The "outer" function only does the needful I/O and passes the data into the - "inner" function. The "inner" function is often private, whereas + "inner" function. The "inner" function is often private, whereas the "outer" function is often public. For example, [`inner_try_mine_microblock` and `try_mine_microblock`](https://github.com/stacks-network/stacks-core/blob/4852d6439b473e24705f14b8af637aded33cb422/testnet/stacks-node/src/neon_node.rs#L1148-L1216). ## Refactoring -* **Any PR that does a large-scale refactoring must be in its own PR**. This includes PRs that touch multiple subsystems. Refactoring often adds line noise that obscures the new functional changes that the PR proposes. Small-scale refactorings are permitted to ship with functional changes. +- **Any PR that does a large-scale refactoring must be in its own PR**. This includes PRs that touch multiple subsystems. Refactoring often adds line noise that obscures the new functional changes that the PR proposes. Small-scale refactorings are permitted to ship with functional changes. -* Refactoring PRs can generally be bigger, because they are easier to review. However, **large refactorings that could impact the functional behavior of the system should be discussed first** before carried out. This is because it is imperative that they do not stay open for very long (to keep the submitter's maintenance burden low), but nevertheless reviewing them must still take at most 2 hours. Discussing them first front-loads part of the review process. +- Refactoring PRs can generally be bigger, because they are easier to review. However, **large refactorings that could impact the functional behavior of the system should be discussed first** before carried out. This is because it is imperative that they do not stay open for very long (to keep the submitter's maintenance burden low), but nevertheless reviewing them must still take at most 2 hours. Discussing them first front-loads part of the review process. ## Databases -* If at all possible, **the database schema should be preserved**. Exceptions can be made on a case-by-case basis. The reason for this is that it's a big ask for people to re-sync nodes from genesis when they upgrade to a new point release. +- If at all possible, **the database schema should be preserved**. Exceptions can be made on a case-by-case basis. The reason for this is that it's a big ask for people to re-sync nodes from genesis when they upgrade to a new point release. -* Any changes to a database schema must also ship with a **new schema version and new schema migration logic**, as well as _test coverage_ for it. +- Any changes to a database schema must also ship with a **new schema version and new schema migration logic**, as well as _test coverage_ for it. -* The submitter must verify that **any new database columns are indexed**, as relevant to the queries performed on them. Table scans are not permitted if they can be avoided (and they almost always can be). You can find table scans manually by setting the environment variable `BLOCKSTACK_DB_TRACE` when running your tests (this will cause every query executed to be preceded by the output of `EXPLAIN QUERY PLAN` on it). +- The submitter must verify that **any new database columns are indexed**, as relevant to the queries performed on them. Table scans are not permitted if they can be avoided (and they almost always can be). 
You can find table scans manually by setting the environment variable `BLOCKSTACK_DB_TRACE` when running your tests (this will cause every query executed to be preceded by the output of `EXPLAIN QUERY PLAN` on it). -* Database changes **cannot be consensus-critical** unless part of a hard fork (see below). +- Database changes **cannot be consensus-critical** unless part of a hard fork (see below). -* If the database schema changes and no migration can be feasibly done, then the submitter **must spin up a node from genesis to verify that it works** _before_ submitting the PR. This genesis spin-up will be tested again before the next node release is made. +- If the database schema changes and no migration can be feasibly done, then the submitter **must spin up a node from genesis to verify that it works** _before_ submitting the PR. This genesis spin-up will be tested again before the next node release is made. ## Data Input -* **Data from the network, from Bitcoin, and from the config file is untrusted.** Code that ingests such data _cannot assume anything_ about its structure, and _must_ handle any possible byte sequence that can be submitted to the Stacks node. +- **Data from the network, from Bitcoin, and from the config file is untrusted.** Code that ingests such data _cannot assume anything_ about its structure, and _must_ handle any possible byte sequence that can be submitted to the Stacks node. -* **Data previously written to disk by the node is trusted.** If data loaded from the database that was previously stored by the node is invalid or corrupt, it is appropriate to panic. +- **Data previously written to disk by the node is trusted.** If data loaded from the database that was previously stored by the node is invalid or corrupt, it is appropriate to panic. -* **All input processing is space-bound.** Every piece of code that ingests data must impose a maximum size on its byte representation. Any inputs that exceed this size _must be discarded with as little processing as possible_. +- **All input processing is space-bound.** Every piece of code that ingests data must impose a maximum size on its byte representation. Any inputs that exceed this size _must be discarded with as little processing as possible_. -* **All input deserialization is resource-bound.** Every piece of code +- **All input deserialization is resource-bound.** Every piece of code that ingests data must impose a maximum amount of RAM and CPU - required to decode it into a structured representation. If the data + required to decode it into a structured representation. If the data does not decode with the allotted resources, then no further processing may be done and the data is discarded. For an example, see how the parsing functions in the http module use `BoundReader` and `MAX_PAYLOAD_LEN` in [http.rs](https://github.com/stacks-network/stacks-core/blob/4852d6439b473e24705f14b8af637aded33cb422/src/net/http.rs#L2260-L2285). -* **All network input reception is time-bound.** Every piece of code that ingests data _from the network_ must impose a maximum amount of time that ingestion can take. If the data takes too long to arrive, then it must be discarded without any further processing. There is no time bound for data ingested from disk or passed as an argument; this requirement is meant by the space-bound requirement. +- **All network input reception is time-bound.** Every piece of code that ingests data _from the network_ must impose a maximum amount of time that ingestion can take. 
If the data takes too long to arrive, then it must be discarded without any further processing. There is no time bound for data ingested from disk or passed as an argument; this requirement is met by the space-bound requirement. -* **Untrusted data ingestion must not panic.** Every piece of code that ingests untrusted data must gracefully handle errors. Panicking failures are forbidden for such data. Panics are only allowed if the ingested data was previously written by the node (and thus trusted). +- **Untrusted data ingestion must not panic.** Every piece of code that ingests untrusted data must gracefully handle errors. Panicking failures are forbidden for such data. Panics are only allowed if the ingested data was previously written by the node (and thus trusted). ## Non-consensus Changes to Blocks, Microblocks, Transactions, and Clarity -Any changes to code that alters how a block, microblock, or transaction is processed by the node should be **treated as a breaking change until proven otherwise**. This includes changes to the Clarity VM. The reviewer _must_ flag any such changes in the PR, and the submitter _must_ convince _all_ reviewers that they will _not_ break consensus. +Any changes to code that alters how a block, microblock, or transaction is processed by the node should be **treated as a breaking change until proven otherwise**. This includes changes to the Clarity VM. The reviewer _must_ flag any such changes in the PR, and the submitter _must_ convince _all_ reviewers that they will _not_ break consensus. -Changes that touch any of these four code paths must be treated with the utmost care. If _any_ core developer suspects that a given PR would break consensus, then they _must_ act to prevent the PR from merging. +Changes that touch any of these four code paths must be treated with the utmost care. If _any_ core developer suspects that a given PR would break consensus, then they _must_ act to prevent the PR from merging. ## Changes to the Peer Network -Any changes to the peer networking code **must be run on both mainnet and testnet before the PR can be merged.** The submitter should set up a testable node or set of nodes that reviewers can interact with. +Any changes to the peer networking code **must be run on both mainnet and testnet before the PR can be merged.** The submitter should set up a testable node or set of nodes that reviewers can interact with. Changes to the peer network should be deployed incrementally and tested by multiple parties when possible to verify that they function properly in a production setting. ## Performance Improvements -Any PRs that claim to improve performance **must ship with reproducible benchmarks** that accurately measure the improvement. This data must also be reported in the PR submission. +Any PRs that claim to improve performance **must ship with reproducible benchmarks** that accurately measure the improvement. This data must also be reported in the PR submission. For an example, see [PR #3075](https://github.com/stacks-network/stacks-core/pull/3075). ## Error Handling -* **Results must use `Error` types**. Fallible functions in the -codebase must use `Error` types in their `Result`s. If a new module's -errors are sufficiently different from existing `Error` types in the -codebaes, the new module must define a new `Error` type. Errors that -are caused by other `Error` types should be wrapped in a variant of -the new `Error` type. You should provide conversions via a `From` -trait implementation.
+- **Results must use `Error` types**. Fallible functions in the + codebase must use `Error` types in their `Result`s. If a new module's + errors are sufficiently different from existing `Error` types in the + codebase, the new module must define a new `Error` type. Errors that + are caused by other `Error` types should be wrapped in a variant of + the new `Error` type. You should provide conversions via a `From` + trait implementation. -* Functions that act on externally-submitted data **must never panic**. This includes code that acts on incoming network messages, blockchain data, and burnchain (Bitcoin) data. +- Functions that act on externally-submitted data **must never panic**. This includes code that acts on incoming network messages, blockchain data, and burnchain (Bitcoin) data. -* **Runtime panics should be used sparingly**. Generally speaking, a runtime panic is only appropriate if there is no reasonable way to recover from the error condition. For example, this includes (but is not limited to) disk I/O errors, database corruption, and unreachable code. +- **Runtime panics should be used sparingly**. Generally speaking, a runtime panic is only appropriate if there is no reasonable way to recover from the error condition. For example, this includes (but is not limited to) disk I/O errors, database corruption, and unreachable code. -* If a runtime panic is desired, it **must have an appropriate error message**. +- If a runtime panic is desired, it **must have an appropriate error message**. ## Logging -* Log messages should be informative and context-free as possible. They are used mainly to help us identify and diagnose problems. They are _not_ used to help you verify that your code works; that's the job of a unit test. +- Log messages should be as informative and context-free as possible. They are used mainly to help us identify and diagnose problems. They are _not_ used to help you verify that your code works; that's the job of a unit test. -* **DO NOT USE println!() OR eprintln!()**. Instead, use the logging macros (`test_debug!()`, `trace!()`, `debug!()`, `info!()`, `warn!()`, `error!()`). +- **DO NOT USE println!() OR eprintln!()**. Instead, use the logging macros (`test_debug!()`, `trace!()`, `debug!()`, `info!()`, `warn!()`, `error!()`). -* Use **structured logging** to include dynamic data in your log entry. For example, `info!("Append block"; "block_id" => %block_id)` as opposed to `info!("Append block with block_id = {}", block_id)`. +- Use **structured logging** to include dynamic data in your log entry. For example, `info!("Append block"; "block_id" => %block_id)` as opposed to `info!("Append block with block_id = {}", block_id)`. -* Use `trace!()` and `test_debug!()` liberally. It only runs in tests. +- Use `trace!()` and `test_debug!()` liberally. It only runs in tests. -* Use `debug!()` for information that is relevant for diagnosing problems at runtime. This is off by default, but can be turned on with the `BLOCKSTACK_DEBUG` environment variable. +- Use `debug!()` for information that is relevant for diagnosing problems at runtime. This is off by default, but can be turned on with the `BLOCKSTACK_DEBUG` environment variable. -* Use `info!()` sparingly. +- Use `info!()` sparingly. -* Use `warn!()` or `error!()` only when there really is a problem. +- Use `warn!()` or `error!()` only when there really is a problem.
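To make the `Error`-type guidance above concrete, here is a minimal, hypothetical sketch. The module, type, and function names are illustrative only and are not taken from the codebase:

```rust
use std::fmt;
use std::io;

/// Errors raised by a hypothetical new module.
#[derive(Debug)]
pub enum ExampleError {
    /// Wraps a lower-level I/O error.
    Io(io::Error),
    /// The input exceeded the module's maximum allowed size.
    InputTooLarge(usize),
}

impl fmt::Display for ExampleError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ExampleError::Io(e) => write!(f, "I/O error: {}", e),
            ExampleError::InputTooLarge(len) => write!(f, "input too large: {} bytes", len),
        }
    }
}

impl std::error::Error for ExampleError {}

/// Wrap the underlying error in a variant of the new `Error` type, so that
/// fallible functions can use the `?` operator.
impl From<io::Error> for ExampleError {
    fn from(e: io::Error) -> Self {
        ExampleError::Io(e)
    }
}

/// A fallible function that returns the module's `Error` type in its `Result`.
pub fn read_bounded_file(path: &str, max_len: usize) -> Result<Vec<u8>, ExampleError> {
    let bytes = std::fs::read(path)?; // the `io::Error` is converted by the `From` impl
    if bytes.len() > max_len {
        return Err(ExampleError::InputTooLarge(bytes.len()));
    }
    Ok(bytes)
}
```

The `From` implementation is what lets callers wrap lower-level errors without boilerplate while still returning the module's own `Error` type.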
## Consensus-Critical Code -A **consensus-critical change** is a change that affects how the Stacks blockchain processes blocks, microblocks, or transactions, such that a node with the patch _could_ produce a different state root hash than a node without the patch. If this is even _possible_, then the PR is automatically treated as a consensus-critical change and must ship as part of a hard fork. It must also be described in a SIP. +A **consensus-critical change** is a change that affects how the Stacks blockchain processes blocks, microblocks, or transactions, such that a node with the patch _could_ produce a different state root hash than a node without the patch. If this is even _possible_, then the PR is automatically treated as a consensus-critical change and must ship as part of a hard fork. It must also be described in a SIP. -* **All changes to consensus-critical code must be opened against `next`**. It is _never acceptable_ to open them against `develop` or `master`. +- **All changes to consensus-critical code must be opened against `next`**. It is _never acceptable_ to open them against `develop` or `master`. -* **All consensus-critical changes must be gated on the Stacks epoch**. They may only take effect once the system enters a specific epoch (and this must be documented). +- **All consensus-critical changes must be gated on the Stacks epoch**. They may only take effect once the system enters a specific epoch (and this must be documented). A non-exhaustive list of examples of consensus-critical changes include: -* Adding or changing block, microblock, or transaction wire formats -* Changing the criteria under which a burnchain operation will be accepted by the node -* Changing the data that gets stored to a MARF key/value pair in the Clarity or Stacks chainstate MARFs -* Changing the order in which data gets stored in the above -* Adding, changing, or removing Clarity functions -* Changing the cost of a Clarity function -* Adding new kinds of transactions, or enabling certain transaction data field values that were previously forbidden. +- Adding or changing block, microblock, or transaction wire formats +- Changing the criteria under which a burnchain operation will be accepted by the node +- Changing the data that gets stored to a MARF key/value pair in the Clarity or Stacks chainstate MARFs +- Changing the order in which data gets stored in the above +- Adding, changing, or removing Clarity functions +- Changing the cost of a Clarity function +- Adding new kinds of transactions, or enabling certain transaction data field values that were previously forbidden. ## Testing -* **Unit tests should focus on the business logic with mocked data**. To the greatest extent possible, each error path should be tested _in addition to_ the success path. A submitter should expect to spend most of their test-writing time focusing on error paths; getting the success path to work is often much easier than the error paths. +- **Unit tests should focus on the business logic with mocked data**. To the greatest extent possible, each error path should be tested _in addition to_ the success path. A submitter should expect to spend most of their test-writing time focusing on error paths; getting the success path to work is often much easier than the error paths. -* **Unit tests should verify that the I/O code paths work**, but do so in a way that does not "clobber" other tests or prevent other tests from running in parallel (if it can be avoided). 
This means that unit tests should use their own directories for storing transient state (in `/tmp`), and should bind on ports that are not used anywhere else. +- **Unit tests should verify that the I/O code paths work**, but do so in a way that does not "clobber" other tests or prevent other tests from running in parallel (if it can be avoided). This means that unit tests should use their own directories for storing transient state (in `/tmp`), and should bind on ports that are not used anywhere else. -* If randomness is needed, **tests should use a seeded random number generator if possible**. This ensures that they will reliably pass in CI. +- If randomness is needed, **tests should use a seeded random number generator if possible**. This ensures that they will reliably pass in CI. -* When testing a consensus-critical code path, the test coverage should verify that the new behavior is only possible within the epoch(s) in which the behavior is slated to activate. Above all else, **backwards-compatibility is a hard requirement.** +- When testing a consensus-critical code path, the test coverage should verify that the new behavior is only possible within the epoch(s) in which the behavior is slated to activate. Above all else, **backwards-compatibility is a hard requirement.** -* **Integration tests are necessary when the PR has a consumer-visible effect**. For example, changes to the RESTful API, event stream, and mining behavior all require integration tests. +- **Integration tests are necessary when the PR has a consumer-visible effect**. For example, changes to the RESTful API, event stream, and mining behavior all require integration tests. -* Every consensus-critical change needs an integration test to verify that the feature activates only when the hard fork activates. +- Every consensus-critical change needs an integration test to verify that the feature activates only when the hard fork activates. PRs must include test coverage. However, if your PR includes large tests or tests which cannot run in parallel (which is the default operation of the `cargo test` command), these tests should be decorated with `#[ignore]`. A test should be marked `#[ignore]` if: - 1. It does not _always_ pass `cargo test` in a vanilla environment - (i.e., it does not need to run with `--test-threads 1`). - - 2. Or, it runs for over a minute via a normal `cargo test` execution - (the `cargo test` command will warn if this is not the case). - +1. It does not _always_ pass `cargo test` in a vanilla environment + (i.e., it does not need to run with `--test-threads 1`). +2. Or, it runs for over a minute via a normal `cargo test` execution + (the `cargo test` command will warn if this is not the case). ## Formatting @@ -406,17 +388,18 @@ cargo fmt-stacks ``` ## Comments + Comments are very important for the readability and correctness of the codebase. The purpose of comments is: -* Allow readers to understand the roles of components and functions without having to check how they are used. -* Allow readers to check the correctness of the code against the comments. -* Allow readers to follow tests. +- Allow readers to understand the roles of components and functions without having to check how they are used. +- Allow readers to check the correctness of the code against the comments. +- Allow readers to follow tests. In the limit, if there are no comments, the problems that arise are: -* Understanding one part of the code requires understanding *many* parts of the code. 
This is because the reader is forced to learn the meanings of constructs inductively through their use. Learning how one construct is used requires understanding its neighbors, and then their neighbors, and so on, recursively. Instead, with a good comment, the reader can understand the role of a construct with `O(1)` work by reading the comment. -* The user cannot be certain if there is a bug in the code, because there is no distinction between the contract of a function, and its definition. -* The user cannot be sure if a test is correct, because the logic of the test is not specified, and the functions do not have contracts. +- Understanding one part of the code requires understanding _many_ parts of the code. This is because the reader is forced to learn the meanings of constructs inductively through their use. Learning how one construct is used requires understanding its neighbors, and then their neighbors, and so on, recursively. Instead, with a good comment, the reader can understand the role of a construct with `O(1)` work by reading the comment. +- The user cannot be certain if there is a bug in the code, because there is no distinction between the contract of a function, and its definition. +- The user cannot be sure if a test is correct, because the logic of the test is not specified, and the functions do not have contracts. ### Comment Formatting @@ -430,14 +413,13 @@ Comments are to be formatted in typical `rust` style, specifically: - When documenting panics, errors, or other conceptual sections, introduce a Markdown section with a single `#`, e.g.: - ```rust - # Errors - * ContractTooLargeError: Thrown when `contract` is larger than `MAX_CONTRACT_SIZE`. - ``` + ```rust + # Errors + * ContractTooLargeError: Thrown when `contract` is larger than `MAX_CONTRACT_SIZE`. + ``` ### Content of Comments - #### Component Comments Comments for a component (`struct`, `trait`, or `enum`) should explain what the overall @@ -485,7 +467,7 @@ impl<'a, 'b> ReadOnlyChecker<'a, 'b> { This comment is considered positive because it explains the contract of the function in pseudo-code. Someone who understands the constructs mentioned could, e.g., write a test for this method from this description. -#### Comments on Implementations of Virtual Methods +#### Comments on Implementations of Virtual Methods Note that, if a function implements a virtual function on an interface, the comments should not repeat what was specified on the interface declaration. The comment should only add information specific to that implementation. @@ -507,7 +489,7 @@ pub struct ReadOnlyChecker<'a, 'b> { defined_functions: HashMap, ``` -This comment is considered positive because it clarifies users might have about the content and role of this member. E.g., it explains that the `bool` indicates whether the function is *read-only*, whereas this cannot be gotten from the signature alone. +This comment is considered positive because it clarifies users might have about the content and role of this member. E.g., it explains that the `bool` indicates whether the function is _read-only_, whereas this cannot be gotten from the signature alone. #### Test Comments @@ -543,14 +525,14 @@ This comment is considered positive because it explains the purpose of the test Contributors should strike a balance between commenting "too much" and commenting "too little". Commenting "too much" primarily includes commenting things that are clear from the context. 
Commenting "too little" primarily includes writing no comments at all, or writing comments that leave important questions unresolved. -Human judgment and creativity must be used to create good comments, which convey important information with small amounts of text. There is no single rule which can determine what a good comment is. Longer comments are *not* always better, since needlessly long comments have a cost: they require the reader to read more, take up whitespace, and take longer to write and review. +Human judgment and creativity must be used to create good comments, which convey important information with small amounts of text. There is no single rule which can determine what a good comment is. Longer comments are _not_ always better, since needlessly long comments have a cost: they require the reader to read more, take up whitespace, and take longer to write and review. ### Don't Restate Names in Comments The contracts of functions should be implemented precisely enough that tests could be written looking only at the declaration and the comments (and without looking at the definition!). However: -* **the author should assume that the reader has already read and understood the function name, variable names, type names, etc.** -* **the author should only state information that is new** +- **the author should assume that the reader has already read and understood the function name, variable names, type names, etc.** +- **the author should only state information that is new** So, if a function and its variables have very descriptive names, then there may be nothing to add in the comments at all! @@ -561,7 +543,7 @@ So, if a function and its variables have very descriptive names, then there may fn append_transaction_to_block(transaction:Transaction, &mut Block) -> Result<()> ``` -This is considered bad because the function name already says "append transaction to block", so it doesn't add anything to restate it in the comments. However, *do* add anything that is not redundant, such as elaborating what it means to "append" (if there is more to say), or what conditions will lead to an error. +This is considered bad because the function name already says "append transaction to block", so it doesn't add anything to restate it in the comments. However, _do_ add anything that is not redundant, such as elaborating what it means to "append" (if there is more to say), or what conditions will lead to an error. **Good Example** @@ -573,39 +555,40 @@ This is considered bad because the function name already says "append transactio fn append_transaction_to_block(transaction:Transaction, block:&mut Block) -> Result<()> ``` -This is considered good because the reader builds on the context created by the function and variable names. Rather than restating them, the function just adds elements of the contract that are not implicit in the declaration. +This is considered good because the reader builds on the context created by the function and variable names. Rather than restating them, the function just adds elements of the contract that are not implicit in the declaration. ### Do's and Dont's of Comments -*Don't* over-comment by documenting things that are clear from the context. E.g.: +_Don't_ over-comment by documenting things that are clear from the context. E.g.: - Don't document the types of inputs or outputs, since these are parts of the type signature in `rust`. 
- Don't necessarily document standard "getters" and "setters", like `get_clarity_version()`, unless there is unexpected information to add with the comment. - Don't explain that a specific test does type-checking, if it is in a file that is dedicated to type-checking. -*Do* document things that are not clear, e.g.: +_Do_ document things that are not clear, e.g.: - For a function called `process_block`, explain what it means to "process" a block. - For a function called `process_block`, make clear whether we mean anchored blocks, microblocks, or both. - For a function called `run`, explain the steps involved in "running". - For a function that takes arguments `peer1` and `peer2`, explain the difference between the two. -- For a function that takes an argument `height`, either explain in the comment what this is the *height of*. Alternatively, expand the variable name to remove the ambiguity. +- For a function that takes an argument `height`, either explain in the comment what this is the _height of_. Alternatively, expand the variable name to remove the ambiguity. - For a test, document what it is meant to test, and why the expected answers are, in fact, expected. ### Changing Code Instead of Comments Keep in mind that better variable names can reduce the need for comments, e.g.: -* `burnblock_height` instead of `height` may eliminate the need to comment that `height` refers to a burnblock height -* `process_microblocks` instead of `process_blocks` is more correct, and may eliminate the need to to explain that the inputs are microblocks -* `add_transaction_to_microblock` explains more than `handle_transaction`, and reduces the need to even read the comment +- `burnblock_height` instead of `height` may eliminate the need to comment that `height` refers to a burnblock height +- `process_microblocks` instead of `process_blocks` is more correct, and may eliminate the need to explain that the inputs are microblocks +- `add_transaction_to_microblock` explains more than `handle_transaction`, and reduces the need to even read the comment # Licensing and contributor license agreement -`stacks-core` is released under the terms of the GPL version 3. Contributions -that are not licensed under compatible terms will be rejected. Moreover, +`stacks-core` is released under the terms of the GPL version 3. Contributions +that are not licensed under compatible terms will be rejected. Moreover, contributions will not be accepted unless _all_ authors accept the project's contributor license agreement. ## Use of AI-code Generation + The Stacks Foundation has a very strict policy of not accepting AI-generated code PRs due to uncertainty about licensing issues.
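Tying the commenting and naming guidance above together, a small hypothetical sketch (the type, field, and function names are illustrative only, not from the codebase): descriptive names carry most of the meaning, and the comments add only what the names cannot say.

```rust
/// An anchored Stacks block received from a peer, already deserialized.
/// (Illustrative type, not from the codebase.)
pub struct ExampleAnchoredBlock {
    /// Height of the burnchain (Bitcoin) block this block is anchored to,
    /// not the Stacks block height.
    pub burnblock_height: u64,
    /// Serialized transactions carried by the block.
    pub txs: Vec<Vec<u8>>,
}

/// Returns `true` if the anchored block (microblocks are handled elsewhere)
/// can be staged for validation: it must carry at least one transaction, and
/// every transaction payload must respect `max_tx_len`.
pub fn can_stage_anchored_block(block: &ExampleAnchoredBlock, max_tx_len: usize) -> bool {
    !block.txs.is_empty() && block.txs.iter().all(|tx| tx.len() <= max_tx_len)
}
```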
diff --git a/README.md b/README.md index 6cdb42857f..0279b25116 100644 --- a/README.md +++ b/README.md @@ -12,7 +12,6 @@ Stacks is a layer-2 blockchain that uses Bitcoin as a base layer for security an [![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg?style=flat)](https://www.gnu.org/licenses/gpl-3.0) [![Release](https://img.shields.io/github/v/release/stacks-network/stacks-core?style=flat)](https://github.com/stacks-network/stacks-core/releases/latest) -[![Build Status](https://github.com/stacks-network/stacks-core/actions/workflows/ci.yml/badge.svg?branch=master&event=workflow_dispatch&style=flat)](https://github.com/stacks-network/stacks-core/actions/workflows/ci.yml?query=event%3Aworkflow_dispatch+branch%3Amaster) [![Discord Chat](https://img.shields.io/discord/621759717756370964.svg)](https://stacks.chat) ## Building diff --git a/SECURITY.md b/SECURITY.md index e59229b3a1..d3d4ada23d 100644 --- a/SECURITY.md +++ b/SECURITY.md @@ -2,7 +2,7 @@ ## Supported Versions -Please see [Releases](https://github.com/stacks-network/stacks-blockchain/releases). It is recommended to use the [most recently released version](https://github.com/stacks-network/stacks-blockchain/releases/latest). +Please see [Releases](https://github.com/stacks-network/stacks-core/releases). It is recommended to use the [most recently released version](https://github.com/stacks-network/stacks-core/releases/latest). ## Reporting a vulnerability @@ -21,61 +21,22 @@ You may also contact us with any questions or to report a vulnerabilty using the | Name | Fingerprint | | ------- | ------------------ | -| security@stacks.org | 8A8B 3C3B 803A 0603 8FB5 3F69 357C 80AB 0885 87A | +| security@stacks.org | ABA3 7FA3 6DBB A591 B0E5 5949 0E94 D065 B32E C7E6 | ``` -----BEGIN PGP PUBLIC KEY BLOCK----- - -mQINBGSBJLgBEACb6bnuvchM5wzmCBh8tvb5Fc90AGmUC9Bfiw85kTNr5N+6Y+fj -Gcyy2ohUEh+5hQE2pJLYzWcEM8ZFomxuwuDkjEkwJHnMofTwPgeP5P9CJUgcOHDa -v/mzzSXze8nhcADiez6QMP1f1m32FoeLpjFyAPkxSzGDv1L8gMHCJn/d1lZyyl52 -1VO6kN6eazEuA9fCCK+ZjUWz5pZCs6QVQ2+3clOoEX+ycomult4/yJhwMHucIPbL -uUGJvpKXkHEi05G2H57mz8sHvz0euRNGTiEUQeVIzaLUmUuViij9KsKk0DSGj3yq -kI+zOcgjAGTMSK00i6bdBV+XZfZlg0uIATr7EGHnb3Lzbvn8lfo/3jaJlQu5elEf -ZlA2nE2dPUfhzY3t8GoroHrbqJaJFd9eZtfTMzwW11KdOzqa0V5FRUkxcBIb81+p -jb2o/YKGWPExX2cHOTYmUdQFM6AtLpif4pMeV11d52vy8LCsjZDwUSZM6lmcg+rL -o2dbBgLvBblHXRtS4UFvx7tHitl5DOk5ZZik3r3jWQmAUXVDBBpq2gaVkponliYv -iVeG+mRLoe+qpvQRMCaw5Rlth0MhqQ26tmpGUIavaFbDqARC8FeIfdov6bUP5/sJ -gaktJrED5T5hNks/N661/AJ8N7JCHJx1exW4TK052PZ2/hHxNSuUEm96VwARAQAB -tClzZWN1cml0eUBzdGFja3Mub3JnIDxzZWN1cml0eUBzdGFja3Mub3JnPokCVAQT -AQgAPhYhBIqLPDuAOgYDj7U/aTV8gKsIhYegBQJkgSS4AhsDBQkHhh87BQsJCAcC -BhUKCQgLAgQWAgMBAh4BAheAAAoJEDV8gKsIhYegWg8P/RsoODRC8QWYnc5oq2Yb -cJSR/0uRcWZVZC/guC553ax89Aro50YsWvd8Z2uakuKKRoc8aPfC4SL1Mufrncwo -9/pIoiB9NQhTAbnp7hBnF5dnIX+Jq4lQIqwG5E36juNiU23qglx3ZZxM5wZrkRi0 -5lsFHpjU4XRkaNgNs6vyiHmtzyR+iESEBY9szfWCRTK8DgOJPLrfDAnc5JWTq7iL -H8pUpClo5p0XFN39lgdhbEISRXaMqY0HJqAI9JKE5UxxRG2uuGbdeHTYu6ji+gz+ -g+gooyVYIVzXVAJHgD9tDsazD+n61epglF0qK0hb+NaRL/2F6KBpmpzY+iDmDkPu -5TTybS52Cm8zSUAsk5w/GSnknep929Cj5hhaD9ijHcLEV0VKSiN0edIPe+Nd57KK -sfggS4l8deD1OjcTxhawRiaKcthdWjm15DzP9WuYEURSpJZAmdSd5Cqx3bSconhW -iYjxAlgZb7t/OJr6N6YQZjga14kwjxia94WNiTz2UQLr/vYAJWQj9RypxL0IrFwr -pJcFnLKec68jLk8opg4LrY9O/gKHQuPDT1EEQ4ssknJAzKxCwrOwCrDvuIzeqzIx -L1mBAtCzF4Q/J1BlmFEIZ7022BycpzPL0VuOrgmyQ6FzEqiKme7Vy/PVWN7H7HhC -qmL2/g9lYt0+gPZazsc8f3NGuQINBGSBJLgBEADTnQe5mX60ClQqigDjAhypkFZY -6k1V850Gp93YsfMYMgzLcyywMo25RT904AF0X72mjF82YZmzOE/b1oSF4ns3nBIg 
-vCIiEsWTtFMZgerWKcHlYPE0VWR4iGC5DiOLbmrECPQ0JucEErJZWvypgot2R3p/ -hAkEV0CjZp8qObgBf+ViZmfMAkclVtJ5AFB0SQjx6m4ounpKV9euO2db302oMIbM -ssM1F2Dsl7oicAreHOdVZ5OLUkk5nrXmLHtIt6QppPVbWkJA9ArBwAHZ39vLQTBZ -YbehZxWDxzW/HK00CEzb70BwK0HZYFYt9lQwGRUou8dvtk3+nFRsfpAlFoHSLXjp -N+uZBnqQhUeyzT81PkavHpAGTq5ExgT13nyE9vJCPuf5lpthuWiUQYBHu5tUym6G -vHRpT1OyqcbUQUlS+iK24dwxglk2S/NYYOsKyRJ8AhLFQGqMHxlpqNsQ5wxFthZo -ayiP7CwaJFfB5TUe4zWpbMM545BPNQodcB8Njb62tj0ZoAgEbhXerMGrVfUGf6AL -FxcyGhGpjkRI4+e8HfDpiObMw2notIUMXJoYQv3Yf7X/n8QPX2EZDaB8dG43r2Hh -EeEDi6+WOI77LtdVDck71ZXqLukCrusO9HZ6GlB0ohqndRgueGztP82Af3W74Ohj -dEOcK0HC26dKPWhk2wARAQABiQI8BBgBCAAmFiEEios8O4A6BgOPtT9pNXyAqwiF -h6AFAmSBJLgCGwwFCQeGHzsACgkQNXyAqwiFh6CT4A//aOMVH/XIXngvfC/xOdDy -3JnZLtu4kmLfcvxbqEGrNhz1AW4t0Uivt9dgBb4VemgQajhYZyjdLgFhYGvCf446 -V1C79qWa1lwESmSWL63+rXNZMNV+siqnVhICrXw4FhCKP2tfnZ5uT03qTbu0S+9N -4bARjXkfYSxhVqeGmO/ZwuuHXQUojt/XNWBFbbKKM1Y6PlvfWrmX/S2cDAf0QgBd -MMLu7phbUjMzQDsenwiueWaRvDnsQB5GzwOiJheQuKLS1rYlJGnW2cwqjQtQnnC3 -YVb4iCialhAL/GWwjR/r7a6ZxuAB0j2zjKsaxtEMoTaVX3EW3Aoy73dvew0wyakq -OCchiIIJVvB6uXGufqAVVBJAgG7MQIEZLt7M6YSu0gYTdsEnNo7WZYMsX+/NGQ8G -5hguIJZl3MRtax1yPK0e0ergaDaetAhfWwQH2ltAVQColm3LfuLpcyoxYMhdiN86 -ggy4c1t0dS8owuAEdoKScOkOdENYEGF4mkd7nLkU5miaOMxg2NO9prCSpwwxDtt3 -XLkl0yw+0W0rM2Wu5pC0Xw21Cva+uBm3+kfyIRqrtc1Vb3ZrGKzCNQcAvvxq9XM5 -VeE6JLwVj8OP1TFuwmpJJeD5LTZDT0SvmjRB8OuxLwEHHjYtdm0ae0n2Cbou9Y0X -hmf6grobEcyS0PCsLHn3r7Y= -=/YN2 +-----BEGIN PGP PUBLIC KEY BLOCK----- + +mDMEZrJ2wBYJKwYBBAHaRw8BAQdADVWSZGbVgc0SE8XmXkRonl85wXrPHkl9bN0B +jKFBIRS0KXNlY3VyaXR5QHN0YWNrcy5vcmcgPHNlY3VyaXR5QHN0YWNrcy5vcmc+ +iJAEExYIADgWIQSro3+jbbulkbDlWUkOlNBlsy7H5gUCZrJ2wAIbAwULCQgHAgYV +CgkICwIEFgIDAQIeAQIXgAAKCRAOlNBlsy7H5tznAQC6iKqtjCqn2RjtCkr2V6xe +kCe92RfwWsG0415jVpVlDgEA350TCqIT1Jwyqz2aNT2TQ9F6fyKzAiNpLVRImOLH +4Aq4OARmsnbAEgorBgEEAZdVAQUBAQdAvwusRitvUX9hSC8NKS48VTT3LVvZvn87 +JQXRc2CngAEDAQgHiHgEGBYIACAWIQSro3+jbbulkbDlWUkOlNBlsy7H5gUCZrJ2 +wAIbDAAKCRAOlNBlsy7H5oCNAQDae9VhB98HMOvZ99ZuSEyLqXxKjK7xT2P0y1Tm +GuUnNAEAhI+1BjFvO/Hy50DcZTmHWvHJ6/dzibw5Ah+oE458IQo= +=yhSO -----END PGP PUBLIC KEY BLOCK----- ``` diff --git a/docs/SIPS.md b/docs/SIPS.md index abce8c220c..0930f5d51e 100644 --- a/docs/SIPS.md +++ b/docs/SIPS.md @@ -4,4 +4,4 @@ Stacks improvement proposals (SIPs) are aimed at describing the implementation o See [SIP 000](https://github.com/stacksgov/sips/blob/main/sips/sip-000/sip-000-stacks-improvement-proposal-process.md) for more details. -The SIPs now located in the [stacksgov/sips](https://github.com/stacksgov/sips) repository as part of the [Stacks Community Governance organization](https://github.com/stacksgov). +The SIPs are located in the [stacksgov/sips](https://github.com/stacksgov/sips) repository as part of the [Stacks Community Governance organization](https://github.com/stacksgov). diff --git a/docs/branching.md b/docs/branching.md new file mode 100644 index 0000000000..04c1e6fd3d --- /dev/null +++ b/docs/branching.md @@ -0,0 +1,35 @@ +# Git Branching + +The following is a slightly modified version of the gitflow branching strategy described in + +## Main Branches + +- **master** - `master` is the main branch where the source code of HEAD always reflects a production-ready state. +- **develop** - `develop` is the branch where the source code of HEAD always reflects a state with the latest delivered development changes for the next release. +- **next** - `next` may contain consensus-breaking changes for a future release. +- **release/X.Y.Z.A.n** is the release branch. 
+ +When the source code in the develop branch reaches a stable point and is ready to be released, a release branch is created as `release/X.Y.Z.A.n` (see [release-process.md](./release-process.md)). +After release, the following will happen: + +- `release/X.Y.Z.A.n` branch is merged back to `master`. +- `master` is then merged into `develop`, and development continues in the `develop` branch. +- `develop` is then merged into `next`. + +## Supporting Branches + +Branch names should use a prefix that conveys the overall goal of the branch. +All branches should be based off of `develop`, with the exception being a hotfix branch which may be based off of `master`. + +- `feat/some-fancy-new-thing`: For new features. +- `fix/some-broken-thing`: For hot fixes and bug fixes. +- `chore/some-update`: Any non code related change (ex: updating CHANGELOG.md, adding comments to code). +- `docs/something-needs-a-comment`: For documentation. +- `ci/build-changes`: For continuous-integration changes. +- `test/more-coverage`: For branches that only add more tests. +- `refactor/formatting-fix`: For refactors of the codebase. + +The full branch name **must**: + +- Have a maximum of 128 characters. +- Only includes ASCII lowercase and uppercase letters, digits, underscores, periods and dashes. diff --git a/docs/ci-release.md b/docs/ci-release.md deleted file mode 100644 index f7881ba675..0000000000 --- a/docs/ci-release.md +++ /dev/null @@ -1,355 +0,0 @@ -# Releases - -All releases are built via a Github Actions workflow named `CI` ([ci.yml](../.github/workflows/ci.yml)), and is responsible for: - -- Verifying code is formatted correctly -- Building binary archives and checksums -- Docker images -- Triggering tests conditionally (different tests run for a release vs a PR) - -1. Releases are only created if a tag is **manually** provided when the [CI workflow](../.github/workflows/ci.yml) is triggered. -2. [Caching](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows) is used to speed up testing - a cache is created based on the type of data (i.e. cargo) and the commit sha. tests can be retried quickly since the cache will persist until the cleanup job is run. -3. 
[nextest](https://nexte.st/) is used to run the tests from an archived file that is cached (using commit sha as a key)) - - Two [archives](https://nexte.st/book/reusing-builds.html) are created, one for genesis tests and one for generic tests (it is done this way to reduce the time spent building) - - Unit-tests are [partitioned](https://nexte.st/book/partitioning.html) and multi-threaded to speed up execution time - -## TL;DR - -- Pushing a feature branch will not trigger a workflow -- An open/re-opened/synchronized PR will produce a single image built from source on Debian with glibc with 2 tags: - - `stacks-core:` - - `stacks-core:` -- A merged PR into `default-branch` from `develop` will produce a single image built from source on Debian with glibc: - - `stacks-core:` -- An untagged build of any branch will produce a single image built from source on Debian with glibc: - - `stacks-core:` -- A tagged release on a non-default branch will produce: - - Docker Alpine image for several architectures tagged with: - - `stacks-core:` - - Docker Debian image for several architectures tagged with: - - `stacks-core:` -- A tagged release on the default branch will produce: - - Github Release of the specified tag with: - - Binary archives for several architectures - - Docker Alpine image for several architectures tagged with: - - `stacks-core:` - - `stacks-core:` - - Docker Debian image for several architectures tagged with: - - `stacks-core:` - - `stacks-core:` - -## Release workflow - -1. Create a feature branch: `feat/fix-something` -2. PR `feat/fix-something` to the `develop` branch where the PR is numbered `112` - 1. Docker image tagged with the **branch name** and **PR number** - - ex: - - `stacks-core:feat-fix-something` - - `stacks-core:pr-112` - 2. CI tests are run -3. PR `develop` to the default branch where the PR is numbered `112` - 1. Docker image tagged with the **branch name** and **PR number** - - ex: - - `stacks-core:feat-fix-something` - - `stacks-core:pr-112` - 2. CI tests are run -4. Merge `develop` branch to the default branch - 1. Docker image is tagged with the **default branch** `master` - - ex: - - `stacks-core:master` - 2. CI tests are run -5. CI workflow is manually triggered on **non-default branch** with a version, i.e. `2.1.0.0.0-rc0` - 1. No Docker images/binaries are created - 2. All release tests are run -6. CI workflow is manually triggered on **default branch** with a version, i.e. `2.1.0.0.0` - 1. Github release for the manually input version is created with binaries - 2. All release tests are run - 3. Docker image pushed with tags of the **input version** and **latest** - - ex: - - `stacks-core:2.1.0.0.0-debian` - - `stacks-core:latest-debian` - - `stacks-core:2.1.0.0.0` - - `stacks-core:latest` - -## Tests - -Tests are separated into several different workflows, with the intention that they can be _conditionally_ run depending upon the triggering operation. For example, on a PR synchronize we don't want to run some identified "slow" tests, but we do want to run the [Stacks Blockchain Tests](../.github/workflows/stacks-blockchain-tests.yml) and [Bitcoin Tests](../.github/workflows/bitcoin-tests.yml). - -There are also 2 different methods in use with regard to running tests: - -1. [Github Actions matrix](https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs) -2. [nextest partitioning](https://nexte.st/book/partitioning.html) - -A matrix is used when there are several known tests that need to be run. 
Partitions (shards) are used when there is a large and unknown number of tests to run (ex: `cargo test` to run all tests). - -There is also a workflow designed to run tests that are manually triggered: [Standalone Tests](../.github/workflows/standalone-tests.yml). -This workflow requires you to select which test(s) you want to run, which then triggers a reusable workflow via conditional. For example, selecting "Epoch Tests" will run the tests defined in [Epoch Tests](../.github/workflows/epoch-tests.yml). Likewise, selecting `Release Tests` will run the same tests as a release workflow. - -Files: - -- [Standalone Tests](../.github/workflows/standalone-tests.yml) -- [Stacks Blockchain Tests](../.github/workflows/stacks-blockchain-tests.yml) -- [Bitcoin Tests](../.github/workflows/bitcoin-tests.yml) -- [Atlas Tests](../.github/workflows/atlas-tests.yml) -- [Epoch Tests](../.github/workflows/epoch-tests.yml) -- [Slow Tests](../.github/workflows/slow-tests.yml) - -### Adding/changing tests - -With the exception of `unit-tests` in [Stacks Blockchain Tests](../.github/workflows/stacks-blockchain-tests.yml), adding/removing a test requires a change to the workflow matrix. Example from [Atlas Tests](../.github/workflows/atlas-tests.yml): - -```yaml -atlas-tests: - name: Atlas Test - runs-on: ubuntu-latest - strategy: - ## Continue with the test matrix even if we've had a failure - fail-fast: false - ## Run a maximum of 2 concurrent tests from the test matrix - max-parallel: 2 - matrix: - test-name: - - tests::neon_integrations::atlas_integration_test - - tests::neon_integrations::atlas_stress_integration_test -``` - -Example of adding a new test `tests::neon_integrations::atlas_new_test`: - -```yaml - ... - matrix: - test-name: - - tests::neon_integrations::atlas_integration_test - - tests::neon_integrations::atlas_stress_integration_test - - tests::neon_integrations::atlas_new_test -``` - -The separation of tests (outside of [Slow Tests](../.github/workflows/slow-tests.yml)) is performed by creating a separate workflow for each _type_ of test that is being run. Using the example above, to add/remove any tests from being run - the `matrix` will need to be adjusted. - -ex: - -- `Atlas Tests`: Tests related to Atlas -- `Bitcoin Tests`: Tests relating to burnchain operations -- `Epoch Tests`: Tests related to epoch changes -- `Slow Tests`: These tests have been identified as taking longer than others. The threshold used is if a test takes longer than `10 minutes` to complete successfully (or times out semi-regularly), it should be added here. -- `Stacks Blockchain Tests`: - - `full-genesis`: Tests related to full genesis - -### Checking the result of multiple tests at once - -You can use the [check-jobs-status](https://github.com/stacks-network/actions/tree/main/check-jobs-status) composite action in order to check that multiple tests are successful in 1 job. -If any of the tests given to the action (JSON string of `needs` field) fails, the step that calls the action will also fail. - -If you have to mark more than 1 job from the same workflow required in a ruleset, you can use this action in a separate job and only add that job as required. - -In the following example, `unit-tests` is a matrix job with 8 partitions (i.e. 8 jobs are running), while the others are normal jobs. -If any of the 11 jobs are failing, the `check-tests` job will also fail. 
- -```yaml -check-tests: - name: Check Tests - runs-on: ubuntu-latest - if: always() - needs: - - full-genesis - - unit-tests - - open-api-validation - - core-contracts-clarinet-test - steps: - - name: Check Tests Status - id: check_tests_status - uses: stacks-network/actions/check-jobs-status@main - with: - jobs: ${{ toJson(needs) }} - summary_print: "true" -``` - -## Triggering a workflow - -### PR a branch to develop - -ex: Branch is named `feat/fix-something` and the PR is numbered `112` - -- [Rust format](../.github/workflows/ci.yml) -- [Create Test Cache](../.github/workflows/create-cache.yml) -- [Stacks Blockchain Tests](../.github/workflows/stacks-blockchain-tests.yml) -- [Bitcoin Tests](../.github/workflows/bitcoin-tests.yml) -- [Docker image](../.github/workflows/image-build-source.yml) is built from source on a debian distribution and pushed with the branch name and PR number as tags -- ex: - - `stacks-core:feat-fix-something` - - `stacks-core:pr-112` - ---- - -### Merging a branch to develop - -Nothing is triggered automatically - ---- - -### PR develop to master branches - -ex: Branch is named `develop` and the PR is numbered `113` - -- [Rust format](../.github/workflows/ci.yml) -- [Create Test Cache](../.github/workflows/create-cache.yml) -- [Stacks Blockchain Tests](../.github/workflows/stacks-blockchain-tests.yml) -- [Bitcoin Tests](../.github/workflows/bitcoin-tests.yml) -- [Docker image](../.github/workflows/image-build-source.yml) is built from source on a debian distribution and pushed with the branch name and PR number as tags -- ex: - - `stacks-core:develop` - - `stacks-core:pr-113` - ---- - -### Merging a PR from develop to master - -- [Rust format](../.github/workflows/ci.yml) -- [Create Test Cache](../.github/workflows/create-cache.yml) -- [Stacks Blockchain Tests](../.github/workflows/stacks-blockchain-tests.yml) -- [Bitcoin Tests](../.github/workflows/bitcoin-tests.yml) -- [Docker image](../.github/workflows/image-build-source.yml) is built from source on a debian distribution and pushed with the branch name as a tag -- ex: - - `stacks-core:master` - ---- - -### Manually triggering workflow without tag (any branch) - -- [Rust format](../.github/workflows/ci.yml) -- [Create Test Cache](../.github/workflows/create-cache.yml) -- [Stacks Blockchain Tests](../.github/workflows/stacks-blockchain-tests.yml) -- [Bitcoin Tests](../.github/workflows/bitcoin-tests.yml) -- [Docker image](../.github/workflows/image-build-source.yml) is built from source on a debian distribution and pushed with the branch name as a tag -- ex: - - `stacks-core:` - ---- - -### Manually triggering workflow with tag on a non-default branch (i.e. tag of `2.1.0.0.0-rc0`) - -- [Rust format](../.github/workflows/ci.yml) -- [Create Test Cache](../.github/workflows/create-cache.yml) -- [Stacks Blockchain Tests](../.github/workflows/stacks-blockchain-tests.yml) -- [Bitcoin Tests](../.github/workflows/bitcoin-tests.yml) -- [Atlas Tests](../.github/workflows/atlas-tests.yml) -- [Epoch Tests](../.github/workflows/epoch-tests.yml) -- [Slow Tests](../.github/workflows/slow-tests.yml) - ---- - -### Manually triggering workflow with tag on default branch (i.e. 
tag of `2.1.0.0.0`) - -- [Rust format](../.github/workflows/ci.yml) -- [Create Test Cache](../.github/workflows/create-cache.yml) -- [Stacks Blockchain Tests](../.github/workflows/stacks-blockchain-tests.yml) -- [Bitcoin Tests](../.github/workflows/bitcoin-tests.yml) -- [Atlas Tests](../.github/workflows/atlas-tests.yml) -- [Epoch Tests](../.github/workflows/epoch-tests.yml) -- [Slow Tests](../.github/workflows/slow-tests.yml) -- [Binaries built for specified architectures](../.github/workflows/create-source-binary.yml) - - Archive and checksum files added to github release -- [Github release](../.github/workflows/github-release.yml) (with artifacts/checksum) is created using the manually input tag -- [Docker image](../.github/workflows/image-build-binary.yml) built from binaries on debian/alpine distributions and pushed with the provided input tag and `latest` -- ex: - - `stacks-core:2.1.0.0.0-debian` - - `stacks-core:latest-debian` - - `stacks-core:2.1.0.0.0` - - `stacks-core:latest` - -## Mutation Testing - -When a new Pull Request (PR) is submitted, this feature evaluates the quality of the tests added or modified in the PR. -It checks the new and altered functions through mutation testing. -Mutation testing involves making small changes (mutations) to the code to check if the tests can detect these changes. - -The mutations are run with or without a [Github Actions matrix](https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs). -The matrix is used when there is a large number of mutations to run ([check doc specific cases](https://github.com/stacks-network/actions/blob/main/stacks-core/mutation-testing/check-packages-and-shards/README.md#outputs)). -We utilize a matrix strategy with shards to enable parallel execution in GitHub Actions. -This approach allows for the concurrent execution of multiple jobs across various runners. -The total workload is divided across all shards, effectively reducing the overall duration of a workflow because the time taken is approximately the total time divided by the number of shards (+ initial build & test time). -This is particularly advantageous for large packages that have significant build and test times, as it enhances efficiency and speeds up the process. - -Since mutation testing is directly correlated to the written tests, there are slower packages (due to the quantity or time it takes to run the tests) like `stackslib` or `stacks-node`. -These mutations are run separately from the others, with one or more parallel jobs, depending on the amount of mutations found. - -Once all the jobs have finished testing mutants, the last job collects all the tested mutations from the previous jobs, combines them and outputs them to the `Summary` section of the workflow, at the bottom of the page. -There, you can find all mutants on categories, with links to the function they tested, and a short description on how to fix the issue. -The PR should only be approved/merged after all the mutants tested are in the `Caught` category. - -### Time required to run the workflow based on mutants outcome and packages' size - -- Small packages typically completed in under 30 minutes, aided by the use of shards. -- Large packages like stackslib and stacks-node initially required about 20-25 minutes for build and test processes. - - Each "missed" and "caught" mutant took approximately 15 minutes. Using shards, this meant about 50-55 minutes for processing around 32 mutants (10-16 functions modified). 
Every additional 8 mutants added another 15 minutes to the runtime. - - "Unviable" mutants, which are functions lacking a Default implementation for their returned struct type, took less than a minute each. - - "Timeout" mutants typically required more time. However, these should be marked to be skipped (by adding a skip flag to their header) since they indicate functions unable to proceed in their test workflow with mutated values, as opposed to the original implementations. - -File: - -- [PR Differences Mutants](../.github/workflows/pr-differences-mutants.yml) - -### Mutant Outcomes - -- caught — A test failed with this mutant applied. -This is a good sign about test coverage. - -- missed — No test failed with this mutation applied, which seems to indicate a gap in test coverage. -Or, it may be that the mutant is undistinguishable from the correct code. -In any case, you may wish to add a better test. - -- unviable — The attempted mutation doesn't compile. -This is inconclusive about test coverage, since the function's return structure may not implement `Default::default()` (one of the mutations applied), hence causing the compile to fail. -It is recommended to add `Default` implementation for the return structures of these functions, only mark that the function should be skipped as a last resort. - -- timeout — The mutation caused the test suite to run for a long time, until it was eventually killed. -You might want to investigate the cause and only mark the function to be skipped if necessary. - -### Skipping Mutations - -Some functions may be inherently hard to cover with tests, for example if: - -- Generated mutants cause tests to hang. -- You've chosen to test the functionality by human inspection or some higher-level integration tests. -- The function has side effects or performance characteristics that are hard to test. -- You've decided that the function is not important to test. - -To mark functions as skipped, so they are not mutated: - -- Add a Cargo dependency of the [mutants](https://crates.io/crates/mutants) crate, version `0.0.3` or later (this must be a regular `dependency`, not a `dev-dependency`, because the annotation will be on non-test code) and mark functions with `#[mutants::skip]`, or - -- You can avoid adding the dependency by using the slightly longer `#[cfg_attr(test, mutants::skip)]`. - -### Example - -```rust -use std::time::{Duration, Instant}; - -/// Returns true if the program should stop -#[cfg_attr(test, mutants::skip)] // Returning false would cause a hang -fn should_stop() -> bool { - true -} - -pub fn controlled_loop() { - let start = Instant::now(); - for i in 0.. { - println!("{}", i); - if should_stop() { - break; - } - if start.elapsed() > Duration::from_secs(60 * 5) { - panic!("timed out"); - } - } -} - -mod test { - #[test] - fn controlled_loop_terminates() { - super::controlled_loop() - } -} -``` - ---- diff --git a/docs/ci-workflow.md b/docs/ci-workflow.md new file mode 100644 index 0000000000..0b1ed2b170 --- /dev/null +++ b/docs/ci-workflow.md @@ -0,0 +1,227 @@ +# CI Workflow + +All releases are built via a Github Actions workflow named [`CI`](../.github/workflows/ci.yml), and is responsible for: + +- Verifying code is formatted correctly +- Integration tests +- Unit tests +- [Mutation tests](https://en.wikipedia.org/wiki/Mutation_testing) +- Creating releases + - Building binary archives and calculating checksums + - Publishing Docker images + +1. 
Releases are only created when the [CI workflow](../.github/workflows/ci.yml) is triggered against a release branch (ex: `release/X.Y.Z.A.n`, or `release/signer-X.Y.Z.A.n.x`). +2. [Caching](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows) is used to speed up testing - a cache is created based on the type of data (i.e. cargo) and the commit sha. + Tests can be retried quickly since the cache will persist until the cleanup job is run or the cache is evicted. +3. [Nextest](https://nexte.st/) is used to run the tests from a cached build archive file (using commit sha as the cache key). + - Two [test archives](https://nexte.st/docs/ci-features/archiving/) are created, one for genesis tests and one for non-genesis tests. + - Unit-tests are [partitioned](https://nexte.st/docs/ci-features/partitioning/) and parallelized to speed up execution time. +4. Most workflow steps are called from a separate actions repo to enforce DRY. + +## TL;DR + +- Pushing a new branch will not trigger a workflow +- A PR that is opened/re-opened/synchronized will produce an amd64 docker image built from source on Debian with glibc with the following tags: + - `stacks-core:` + - `stacks-core:` +- An untagged build of any branch will produce a single image built from source on Debian with glibc: + - `stacks-core:` +- Running the [CI workflow](../.github/workflows/ci.yml) on a `release/X.Y.Z.A.n` branch will produce: + - Github Release of the branch with: + - Binary archives for several architectures + - Checksum file containing hashes for each archive + - Git tag of the `release/X.Y.Z.A.n` version, in the format of: `X.Y.Z.A.n` + - Docker Debian images for several architectures tagged with: + - `stacks-core:latest` + - `stacks-core:X.Y.Z.A.n` + - `stacks-core:X.Y.Z.A.n-debian` + - Docker Alpine images for several architectures tagged with: + - `stacks-core:X.Y.Z.A.n-alpine` + +## Release workflow + +The process to build and tag a release is defined [here](./release-process.md) + +## Tests + +Tests are separated into several different workflows, with the intention that they can be _conditionally_ run depending upon the triggering operation. For example, when a PR is opened we don't want to run some identified "slow" tests, but we do want to run the [Stacks Core Tests](../.github/workflows/stacks-core-tests.yml) and [Bitcoin Tests](../.github/workflows/bitcoin-tests.yml). + +There are also 2 different methods in use with regard to running tests: + +1. [Github Actions matrix](https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs) +2. [nextest partitioning](https://nexte.st/book/partitioning.html) + +A matrix is used when there are several known tests that need to be run in parallel. +Partitions (shards) are used when there is a large and unknown number of tests to run (ex: `cargo test` to run all tests). + +There is also a workflow designed to run tests that is manually triggered: [Standalone Tests](../.github/workflows/standalone-tests.yml). +This workflow requires you to select which test(s) you want to run, which then triggers a reusable workflow via conditional. +For example, selecting `Epoch Tests` will run the tests defined in [Epoch Tests](../.github/workflows/epoch-tests.yml). +Likewise, selecting `Release Tests` will run the same tests as a release workflow. 
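+
+The archive/partition flow described above is driven by reusable workflows in the shared actions repo, so the exact invocations may differ, but a minimal sketch of the underlying nextest commands (archive name and partition count are illustrative) looks like:
+
+```sh
+# Build the test binaries once and store them in a reusable archive (keyed by commit sha in CI)
+cargo nextest archive --archive-file nextest-archive.tar.zst
+
+# A later job runs one slice of the suite from that archive; here partition 1 of 8
+cargo nextest run --archive-file nextest-archive.tar.zst --partition count:1/8
+```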
+ +### Adding/changing tests + +With the exception of `unit-tests` in [Stacks Core Tests](../.github/workflows/stacks-core-tests.yml), adding/removing a test requires a change to the workflow matrix. Example from [Atlas Tests](../.github/workflows/atlas-tests.yml): + +```yaml +atlas-tests: + name: Atlas Test + ... + matrix: + test-name: + - tests::neon_integrations::atlas_integration_test + - tests::neon_integrations::atlas_stress_integration_test +``` + +Example of adding a new test `tests::neon_integrations::atlas_new_test`: + +```yaml +atlas-tests: + name: Atlas Test + ... + matrix: + test-name: + - tests::neon_integrations::atlas_integration_test + - tests::neon_integrations::atlas_stress_integration_test + - tests::neon_integrations::atlas_new_test +``` + +The separation of tests (outside of [Slow Tests](../.github/workflows/slow-tests.yml)) is performed by creating a separate workflow for each _type_ of test that is being run. +Using the example above, to add/remove any tests from being run - the workflow `matrix` will need to be adjusted. + +ex: + +- `Atlas Tests`: Tests related to Atlas +- `Bitcoin Tests`: Tests relating to burnchain operations +- `Epoch Tests`: Tests related to epoch changes +- `P2P Tests`: Tests P2P operations +- `Slow Tests`: These tests have been identified as taking longer than others. The threshold used is if a test takes longer than `10 minutes` to complete successfully (or even times out intermittently), it should be added here. +- `Stacks Core Tests`: + - `full-genesis`: Tests related to full genesis + - `core-contracts`: Tests related to boot contracts + +### Checking the result of multiple tests at once + +The [check-jobs-status](https://github.com/stacks-network/actions/tree/main/check-jobs-status) composite action may be used in order to check that multiple tests are successful in a workflow job. +If any of the tests given to the action (JSON string of `needs` field) fails, the step that calls the action will also fail. + +If you have to mark more than 1 job from the same workflow required in a ruleset, you can use this action in a separate job and only add that job as required. + +In the following example, `unit-tests` is a matrix job from [Stacks Core Tests](../.github/workflows/stacks-core-tests.yml) with 8 partitions (i.e. 8 jobs are running), while the others are normal jobs. +If any of the jobs are failing, the `check-tests` job will also fail. + +```yaml +check-tests: + name: Check Tests + runs-on: ubuntu-latest + if: always() + needs: + - full-genesis + - unit-tests + - open-api-validation + - core-contracts-clarinet-test + steps: + - name: Check Tests Status + id: check_tests_status + uses: stacks-network/actions/check-jobs-status@main + with: + jobs: ${{ toJson(needs) }} + summary_print: "true" +``` + +## Mutation Testing + +When a new Pull Request (PR) is submitted, this feature evaluates the quality of the tests added or modified in the PR. +It checks the new and altered functions through mutation testing. +Mutation testing involves making small changes (mutations) to the code to check if the tests can detect these changes. + +The mutations are run with or without a [Github Actions matrix](https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs). +The matrix is used when there is a large number of mutations to run ([check doc specific cases](https://github.com/stacks-network/actions/blob/main/stacks-core/mutation-testing/check-packages-and-shards/README.md#outputs)). 
+We utilize a matrix strategy with shards to enable parallel execution in GitHub Actions. +This approach allows for the concurrent execution of multiple jobs across various runners. +The total workload is divided across all shards, effectively reducing the overall duration of a workflow because the time taken is approximately the total time divided by the number of shards (+ initial build & test time). +This is particularly advantageous for large packages that have significant build and test times, as it enhances efficiency and speeds up the process. + +Since mutation testing is directly correlated to the written tests, there are slower packages (due to the quantity or time it takes to run the tests) like `stackslib` or `stacks-node`. +These mutations are run separately from the others, with one or more parallel jobs, depending on the amount of mutations found. + +Once all the jobs have finished testing mutants, the last job collects all the tested mutations from the previous jobs, combines them and outputs them to the `Summary` section of the workflow, at the bottom of the page. +There, you can find all mutants on categories, with links to the function they tested, and a short description on how to fix the issue. +The PR should only be approved/merged after all the mutants tested are in the `Caught` category. + +### Time required to run the workflow based on mutants outcome and packages' size + +- Small packages typically completed in under 30 minutes, aided by the use of shards. +- Large packages like stackslib and stacks-node initially required about 20-25 minutes for build and test processes. + - Each "missed" and "caught" mutant took approximately 15 minutes. Using shards, this meant about 50-55 minutes for processing around 32 mutants (10-16 functions modified). Every additional 8 mutants added another 15 minutes to the runtime. + - "Unviable" mutants, which are functions lacking a Default implementation for their returned struct type, took less than a minute each. + - "Timeout" mutants typically required more time. However, these should be marked to be skipped (by adding a skip flag to their header) since they indicate functions unable to proceed in their test workflow with mutated values, as opposed to the original implementations. + +File: + +- [PR Differences Mutants](../.github/workflows/pr-differences-mutants.yml) + +### Mutant Outcomes + +- caught — A test failed with this mutant applied. + This is a good sign about test coverage. + +- missed — No test failed with this mutation applied, which seems to indicate a gap in test coverage. + Or, it may be that the mutant is undistinguishable from the correct code. + In any case, you may wish to add a better test. + +- unviable — The attempted mutation doesn't compile. + This is inconclusive about test coverage, since the function's return structure may not implement `Default::default()` (one of the mutations applied), hence causing the compile to fail. + It is recommended to add `Default` implementation for the return structures of these functions, only mark that the function should be skipped as a last resort. + +- timeout — The mutation caused the test suite to run for a long time, until it was eventually killed. + You might want to investigate the cause and only mark the function to be skipped if necessary. + +### Skipping Mutations + +Some functions may be inherently hard to cover with tests, for example if: + +- Generated mutants cause tests to hang. 
+- You've chosen to test the functionality by human inspection or some higher-level integration tests. +- The function has side effects or performance characteristics that are hard to test. +- You've decided that the function is not important to test. + +To mark functions as skipped, so they are not mutated: + +- Add a Cargo dependency of the [mutants](https://crates.io/crates/mutants) crate, version `0.0.3` or later (this must be a regular `dependency`, not a `dev-dependency`, because the annotation will be on non-test code) and mark functions with `#[mutants::skip]`, or + +- You can avoid adding the dependency by using the slightly longer `#[cfg_attr(test, mutants::skip)]`. + +### Example + +```rust +use std::time::{Duration, Instant}; + +/// Returns true if the program should stop +#[cfg_attr(test, mutants::skip)] // Returning false would cause a hang +fn should_stop() -> bool { + true +} + +pub fn controlled_loop() { + let start = Instant::now(); + for i in 0.. { + println!("{}", i); + if should_stop() { + break; + } + if start.elapsed() > Duration::from_secs(60 * 5) { + panic!("timed out"); + } + } +} + +mod test { + #[test] + fn controlled_loop_terminates() { + super::controlled_loop() + } +} +``` + +--- diff --git a/docs/community.md b/docs/community.md deleted file mode 100644 index ca842151f2..0000000000 --- a/docs/community.md +++ /dev/null @@ -1,23 +0,0 @@ -# Community - -Beyond this Github project, -Stacks maintains a public [forum](https://forum.stacks.org) and an -open [Discord](https://discord.com/invite/XYdRyhf) channel. In addition, the project -maintains a [mailing list](https://newsletter.stacks.org/) which sends out -community announcements. - -- [Forum](https://forum.stacks.org) -- [Discord](https://discord.com/invite/XYdRyhf) -- [Telegram](https://t.me/StacksChat) -- [Newsletter](https://newsletter.stacks.org/) - -The greater Stacks community regularly hosts in-person -[meetups](https://www.meetup.com/topics/blockstack/) as well as a [calendar of Stacks ecosystem events](https://community.stacks.org/events#calendar). The project's -[YouTube channel](https://www.youtube.com/channel/UC3J2iHnyt2JtOvtGVf_jpHQ) includes -videos from some of these meetups, as well as video tutorials to help new -users get started and help developers wrap their heads around the system's -design. - -- [Meetups](https://www.meetup.com/topics/blockstack/) -- [Events Calender](https://community.stacks.org/events#calendar) -- [YouTube channel](https://www.youtube.com/channel/UC3J2iHnyt2JtOvtGVf_jpHQ) diff --git a/docs/init.md b/docs/init.md index f3b98076c6..5bf157e721 100644 --- a/docs/init.md +++ b/docs/init.md @@ -14,9 +14,8 @@ The MacOS configuration assumes stacks-blockchain will be set up for the current ## Configuration -For an example configuration file that describes the configuration settings, -see [mainnet-follower-conf.toml](../testnet/stacks-node/conf/mainnet-follower-conf.toml). -Available configuration options are documented here: https://docs.stacks.co/references/stacks-node-configuration +For an example configuration file that describes the configuration settings, see [mainnet-follower-conf.toml](../testnet/stacks-node/conf/mainnet-follower-conf.toml). +Available configuration options are [documented here](https://docs.stacks.co/stacks-in-depth/nodes-and-miners/stacks-node-configuration). 
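+
+A quick way to sanity-check a configuration file before starting the node is the `check-config` subcommand (a sketch using the example follower config referenced above; see [profiling.md](./profiling.md) for sample output):
+
+```sh
+cargo run -r -p stacks-node --bin stacks-node check-config --config testnet/stacks-node/conf/mainnet-follower-conf.toml
+```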
## Paths diff --git a/docs/mining.md b/docs/mining.md index e113f12d93..34a299cd1c 100644 --- a/docs/mining.md +++ b/docs/mining.md @@ -1,7 +1,7 @@ # Stacks Mining Stacks tokens (STX) are mined by transferring BTC via PoX. To run as a miner, -you should make sure to add the following config fields to your config file: +you should make sure to add the following config fields to your [config file](../testnet/stacks-node/conf/mainnet-miner-conf.toml): ```toml [node] @@ -9,24 +9,22 @@ you should make sure to add the following config fields to your config file: miner = True # Bitcoin private key to spend seed = "YOUR PRIVATE KEY" -# How long to wait for microblocks to arrive before mining a block to confirm them (in milliseconds) -wait_time_for_microblocks = 10000 # Run as a mock-miner, to test mining without spending BTC. Needs miner=True. #mock_mining = True [miner] -# Smallest allowed tx fee, in microSTX -min_tx_fee = 100 -# Time to spend on the first attempt to make a block, in milliseconds. -# This can be small, so your node gets a block-commit into the Bitcoin mempool early. -first_attempt_time_ms = 1000 -# Time to spend on subsequent attempts to make a block, in milliseconds. -# This can be bigger -- new block-commits will be RBF'ed. -subsequent_attempt_time_ms = 60000 -# Time to spend mining a microblock, in milliseconds. -microblock_attempt_time_ms = 30000 # Time to spend mining a Nakamoto block, in milliseconds. nakamoto_attempt_time_ms = 20000 + +[burnchain] +# Maximum amount (in sats) of "burn commitment" to broadcast for the next block's leader election +burn_fee_cap = 20000 +# Amount (in sats) per byte - Used to calculate the transaction fees +satoshis_per_byte = 25 +# Amount of sats to add when RBF'ing bitcoin tx (default: 5) +rbf_fee_increment = 5 +# Maximum percentage to RBF bitcoin tx (default: 150% of satsv/B) +max_rbf = 150 ``` You can verify that your node is operating as a miner by checking its log output @@ -72,4 +70,4 @@ Estimates are then randomly "fuzzed" using uniform random fuzz of size up to ## Further Reading - [stacksfoundation/miner-docs](https://github.com/stacksfoundation/miner-docs) -- [Mining Documentation](https://docs.stacks.co/docs/nodes-and-miners/miner-mainnet) +- [Mining Documentation](https://docs.stacks.co/stacks-in-depth/nodes-and-miners/mine-mainnet-stacks-tokens) diff --git a/docs/profiling.md b/docs/profiling.md index 832b3d4457..4b8343aae9 100644 --- a/docs/profiling.md +++ b/docs/profiling.md @@ -9,7 +9,7 @@ This document describes several techniques to profile (i.e. find performance bot - generating flame graphs, and - profiling sqlite queries. -Note that all bash commands in this document are run from the stacks-blockchain repository root directory. +Note that all bash commands in this document are run from the [stacks-core repository](https://github.com/stacks-network/stacks-core) root directory. 
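+
+If you do not already have a local checkout, one way to obtain it (the destination directory is up to you):
+
+```sh
+git clone https://github.com/stacks-network/stacks-core.git
+cd stacks-core
+```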
## Logging tips @@ -17,7 +17,7 @@ Validating the config file using `stacks-node check-config`: ``` $ cargo run -r -p stacks-node --bin stacks-node check-config --config testnet/stacks-node/conf/mainnet-mockminer-conf.toml -INFO [1661276562.220137] [testnet/stacks-node/src/main.rs:82] [main] stacks-node 0.1.0 (tip-mine:c90476aa8a+, release build, macos [aarch64]) +INFO [1661276562.220137] [testnet/stacks-node/src/main.rs:82] [main] stacks-node 0.1.0 (:, release build, linux [x86_64]) INFO [1661276562.220363] [testnet/stacks-node/src/main.rs:115] [main] Loading config at path testnet/stacks-node/conf/mainnet-mockminer-conf.toml INFO [1661276562.233071] [testnet/stacks-node/src/main.rs:128] [main] Valid config! ``` @@ -28,7 +28,7 @@ Enabling debug logging using environment variable `STACKS_LOG_DEBUG=1`: $ STACKS_LOG_DEBUG=1 cargo run -r -p stacks-node --bin stacks-node check-config --config testnet/stacks-node/conf/mainnet-mockminer-conf.toml INFO [1661276562.220137] [testnet/stacks-node/src/main.rs:82] [main] stacks-node 0.1.0 (tip-mine:c90476aa8a+, release build, macos [aarch64]) INFO [1661276562.220363] [testnet/stacks-node/src/main.rs:115] [main] Loading config at path testnet/stacks-node/conf/mainnet-mockminer-conf.toml -DEBG [1661276562.222450] [testnet/stacks-node/src/main.rs:118] [main] Loaded config file: ConfigFile { burnchain: Some(BurnchainConfigFile { chain: Some("bitcoin"), burn_fee_cap: Some(1), mode: Some("mainnet"), commit_anchor_block_within: None, peer_host: Some("bitcoind.stacks.co"), peer_port: Some(8333), rpc_port: Some(8332), rpc_ssl: None, username: Some("blockstack"), password: Some("blockstacksystem"), timeout: None, magic_bytes: None, local_mining_public_key: None, process_exit_at_block_height: None, poll_time_secs: None, satoshis_per_byte: None, leader_key_tx_estimated_size: None, block_commit_tx_estimated_size: None, rbf_fee_increment: None, max_rbf: None, epochs: None }), node: Some(NodeConfigFile { name: None, seed: None, deny_nodes: None, working_dir: Some("/Users/igor/w/stacks-work/working_dir"), rpc_bind: Some("0.0.0.0:20443"), p2p_bind: Some("0.0.0.0:20444"), p2p_address: None, data_url: None, bootstrap_node: Some("02196f005965cebe6ddc3901b7b1cc1aa7a88f305bb8c5893456b8f9a605923893@seed.mainnet.hiro.so:20444"), local_peer_seed: None, miner: Some(true), mock_mining: Some(true), mine_microblocks: None, microblock_frequency: None, max_microblocks: None, wait_time_for_microblocks: None, prometheus_bind: None, marf_cache_strategy: None, marf_defer_hashing: None, pox_sync_sample_secs: None, use_test_genesis_chainstate: None }), ustx_balance: None, events_observer: Some([EventObserverConfigFile { endpoint: "localhost:3700", events_keys: ["*"] }]), connection_options: None, fee_estimation: None, miner: None } +DEBG [1661276562.222450] [testnet/stacks-node/src/main.rs:118] [main] Loaded config file: ConfigFile { burnchain: Some(BurnchainConfigFile { chain: Some("bitcoin"), burn_fee_cap: Some(1), mode: Some("mainnet"), commit_anchor_block_within: None, peer_host: Some("localhost"), peer_port: Some(8333), rpc_port: Some(8332), rpc_ssl: None, username: Some("btcuser"), password: Some("btcpass"), timeout: None, magic_bytes: None, local_mining_public_key: None, process_exit_at_block_height: None, poll_time_secs: None, satoshis_per_byte: None, leader_key_tx_estimated_size: None, block_commit_tx_estimated_size: None, rbf_fee_increment: None, max_rbf: None, epochs: None }), node: Some(NodeConfigFile { name: None, seed: None, deny_nodes: None, working_dir: 
Some("/Users/igor/w/stacks-work/working_dir"), rpc_bind: Some("0.0.0.0:20443"), p2p_bind: Some("0.0.0.0:20444"), p2p_address: None, data_url: None, bootstrap_node: Some("02196f005965cebe6ddc3901b7b1cc1aa7a88f305bb8c5893456b8f9a605923893@seed.mainnet.hiro.so:20444"), local_peer_seed: None, miner: Some(true), mock_mining: Some(true), mine_microblocks: None, microblock_frequency: None, max_microblocks: None, wait_time_for_microblocks: None, prometheus_bind: None, marf_cache_strategy: None, marf_defer_hashing: None, pox_sync_sample_secs: None, use_test_genesis_chainstate: None }), ustx_balance: None, events_observer: Some([EventObserverConfigFile { endpoint: "localhost:3700", events_keys: ["*"] }]), connection_options: None, fee_estimation: None, miner: None } INFO [1661276562.233071] [testnet/stacks-node/src/main.rs:128] [main] Valid config! ``` diff --git a/docs/release-process.md b/docs/release-process.md index 5e2be08b5d..b96d3d2beb 100644 --- a/docs/release-process.md +++ b/docs/release-process.md @@ -11,18 +11,16 @@ | Linux ARMv7 | _builds are provided but not tested_ | | Linux ARM64 | _builds are provided but not tested_ | - ## Release Schedule and Hotfixes -Normal releases in this repository that add features such as improved RPC endpoints, improved boot-up time, new event -observer fields or event types, etc., are released on a monthly schedule. The currently staged changes for such releases -are in the [develop branch](https://github.com/stacks-network/stacks-core/tree/develop). It is generally safe to run -a `stacks-node` from that branch, though it has received less rigorous testing than release tags. If bugs are found in -the `develop` branch, please do report them as issues on this repository. +Normal releases in this repository that add new features are released on a monthly schedule. +The currently staged changes for such releases are in the [develop branch](https://github.com/stacks-network/stacks-core/tree/develop). +It is generally safe to run a `stacks-node` from that branch, though it has received less rigorous testing than release tags or the [master branch](https://github.com/stacks-network/stacks-core/tree/master). +If bugs are found in the `develop` branch, please do [report them as issues](https://github.com/stacks-network/stacks-core/issues) on this repository. -For fixes that impact the correct functioning or liveness of the network, _hotfixes_ may be issued. These are patches -to the main branch which are backported to the develop branch after merging. These hotfixes are categorized by priority -according to the following rubric: +For fixes that impact the correct functioning or liveness of the network, _hotfixes_ may be issued. +These are patches to the main branch which are backported to the develop branch after merging. +These hotfixes are categorized by priority according to the following rubric: - **High Priority**. Any fix for an issue that could deny service to the network as a whole, e.g., an issue where a particular kind of invalid transaction would cause nodes to stop processing requests or shut down unintentionally. Any fix for an issue that could cause honest miners to produce invalid blocks. - **Medium Priority**. Any fix for an issue that could cause miners to waste funds. @@ -30,90 +28,72 @@ according to the following rubric: ## Versioning -This repository uses a 5 part version number. 
+This repository uses a 5 part version number: ``` X.Y.Z.A.n -X = 2 and does not change in practice unless there’s another Stacks 2.0 type event +X major version - in practice, this does not change unless there’s another significant network update (e.g. a Stacks 3.0 type of event) Y increments on consensus-breaking changes Z increments on non-consensus-breaking changes that require a fresh chainstate (akin to semantic MAJOR) A increments on non-consensus-breaking changes that do not require a fresh chainstate, but introduce new features (akin to semantic MINOR) n increments on patches and hot-fixes (akin to semantic PATCH) ``` -For example, a node operator running version `2.0.10.0.0` would not need to wipe and refresh their chainstate -to upgrade to `2.0.10.1.0` or `2.0.10.0.1`. However, upgrading to `2.0.11.0.0` would require a new chainstate. +Optionally, an extra pre-release field may be appended to the version to specify a release candidate in the format `-rc[0-9]`. ## Non-Consensus Breaking Release Process -For non-consensus breaking releases, this project uses the following release process: - -1. The release must be timed so that it does not interfere with a _prepare - phase_. The timing of the next Stacking cycle can be found - [here](https://stx.eco/dao/tools?tool=2). A release should happen - at least 24 hours before the start of a new cycle, to avoid interfering - with the prepare phase. So, start by being aware of when the release can - happen. - -1. Before creating the release, the release manager must determine the _version - number_ for this release, and create a release branch in the format: `release/X.Y.Z.A.n`. - The factors that determine the version number are - discussed in [Versioning](#versioning). We assume, in this section, - that the change is not consensus-breaking. So, the release manager must first - determine whether there are any "non-consensus-breaking changes that require a - fresh chainstate". This means, in other words, that the database schema has - changed, but an automatic migration was not implemented. Then, the release manager - should determine whether this is a feature release, as opposed to a hotfix or a - patch. Given the answers to these questions, the version number can be computed. - -1. The release manager enumerates the PRs or issues that would _block_ - the release. A label should be applied to each such issue/PR as - `X.Y.Z.A.n-blocker`. The release manager should ping these - issue/PR owners for updates on whether or not those issues/PRs have - any blockers or are waiting on feedback. - -1. The release manager must update the `CHANGELOG.md` file with summaries what - was `Added`, `Changed`, and `Fixed`. The pull requests merged into `develop` - can be found - [here](https://github.com/stacks-network/stacks-core/pulls?q=is%3Apr+is%3Aclosed+base%3Adevelop+sort%3Aupdated-desc). Note, however, that GitHub apparently does not allow sorting by - _merge time_, so, when sorting by some proxy criterion, some care should - be used to understand which PR's were _merged_ after the last release. - -1. Once the blocker PRs have merged, the release manager will create a new tag - by manually triggering the [`CI` Github Actions workflow](https://github.com/stacks-network/stacks-core/actions/workflows/ci.yml) - against the `release/X.Y.Z.A.n` branch. - -1. Once the release candidate has been built, and docker images, etc. 
 are available,
-   the release manager will notify various ecosystem participants to test the release
-   candidate on various staging infrastructure:
-
-   1. Stacks Foundation staging environments.
-   1. Hiro PBC testnet network.
-   1. Hiro PBC mainnet mock miner.
-
-   The release candidate should be announced in the `#stacks-core-devs` channel in the
-   Stacks Discord. For coordinating rollouts on specific infrastructure, the release
-   manager should contact the above participants directly either through e-mail or
-   Discord DM. The release manager should also confirm that the built release on the
-   [Github releases](https://github.com/stacks-network/stacks-core/releases/)
-   page is marked as `Pre-Release`.
-
-1. The release manager will test that the release candidate successfully syncs with
-   the current chain from genesis both in testnet and mainnet. This requires starting
-   the release candidate with an empty chainstate and confirming that it synchronizes
-   with the current chain tip.
-
-1. If bugs or issues emerge from the rollout on staging infrastructure, the release
-   will be delayed until those regressions are resolved. As regressions are resolved,
-   additional release candidates should be tagged. The release manager is responsible
-   for updating the `develop -> master` PR with information about the discovered issues,
-   even if other community members and developers may be addressing the discovered
-   issues.
-
-1. Once the final release candidate has rolled out successfully without issue on staging
-   infrastructure, the tagged release shall no longer marked as Pre-Release on the [Github releases](https://github.com/stacks-network/stacks-core/releases/) page.
-   Announcements will then be shared in the `#stacks-core-devs` channel in the
-   Stacks Discord, as well as the [mailing list](https://groups.google.com/a/stacks.org/g/announce).
-
-1. Finally, the release branch `release/X.Y.Z.A.n` will be PR'ed into the `master` branch, and once merged, a PR for `master->develop` will be opened.
+The release must be timed so that it does not interfere with a _prepare phase_.
+The timing of the next Stacking cycle can be found [here](https://stx.eco/dao/tools?tool=2); to avoid interfering with the prepare phase, all releases should happen at least 24 hours before the start of a new cycle.
+
+1. Before creating the release, the _version number_ must be determined, where the factors that determine the version number are discussed in [Versioning](#versioning).
+
+   - First determine whether there are any "non-consensus-breaking changes that require a fresh chainstate".
+     - In other words, the database schema has changed, but an automatic migration was not implemented.
+   - Determine whether this is a feature release, as opposed to a hotfix or a patch.
+   - A new branch in the format `release/X.Y.Z.A.n(-rc[0-9])` is created from the base branch `develop`.
+
+2. Enumerate PRs and/or issues that would _block_ the release.
+
+   - A label should be applied to each such issue/PR as `X.Y.Z.A.n-blocker`.
+
+3. Since development is continuing in the `develop` branch, it may be necessary to cherry-pick some commits into the release branch (a minimal sketch of this flow appears at the end of this document).
+
+   - Create a feature branch from `release/X.Y.Z.A.n`, ex: `feat/X.Y.Z.A.n-pr_number`.
+   - Add cherry-picked commits to the `feat/X.Y.Z.A.n-pr_number` branch.
+   - Merge `feat/X.Y.Z.A.n-pr_number` into `release/X.Y.Z.A.n`.
+
+4. Open a PR to update the [CHANGELOG](../CHANGELOG.md) file in the `release/X.Y.Z.A.n` branch.
+
+   - Create a chore branch from `release/X.Y.Z.A.n`, ex: `chore/X.Y.Z.A.n-changelog`.
+   - Add summaries of all Pull Requests to the `Added`, `Changed` and `Fixed` sections.
+
+     - Pull requests merged into `develop` can be found [here](https://github.com/stacks-network/stacks-core/pulls?q=is%3Apr+is%3Aclosed+base%3Adevelop+sort%3Aupdated-desc).
+
+       **Note**: GitHub does not allow sorting by _merge time_, so, when sorting by some proxy criterion, some care should be used to understand which PR's were _merged_ after the last release.
+
+5. Once `chore/X.Y.Z.A.n-changelog` has merged, a build may be started by manually triggering the [`CI` workflow](../.github/workflows/ci.yml) against the `release/X.Y.Z.A.n` branch.
+
+6. Once the release candidate has been built and binaries are available, ecosystem participants shall be notified to test the tagged release on various staging infrastructure.
+
+7. The release candidate will be tested to verify that it successfully syncs with the current chain from genesis, both in testnet and mainnet.
+
+8. If bugs or issues emerge from the rollout on staging infrastructure, the release will be delayed until those regressions are resolved.
+
+   - As regressions are resolved, additional release candidates should be tagged.
+   - Repeat steps 3-7 as necessary.
+
+9. Once the final release candidate has rolled out successfully without issue on staging infrastructure, the tagged release shall no longer be marked as Pre-Release on the [Github releases](https://github.com/stacks-network/stacks-core/releases/) page.
+   Announcements will then be shared in the `#stacks-core-devs` channel in the Stacks Discord, as well as the [mailing list](https://groups.google.com/a/stacks.org/g/announce).
+
+10. Finally, the following merges will happen to complete the release process:
+    - Release branch `release/X.Y.Z.A.n` will be merged into the `master` branch.
+    - Then, `master` will be merged into `develop`.
+
+## Consensus Breaking Release Process
+
+Consensus-breaking releases shall follow the same overall process as a non-consensus-breaking release, with the following considerations:
+
+- The release must be timed so that sufficient time is given to perform a genesis sync.
+- The release must take into account the activation height at which the new consensus rules will take effect.
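+
+As referenced in step 3 of the non-consensus-breaking process above, a minimal sketch of the cherry-pick flow (the PR number `1234` and the commit SHA are placeholders):
+
+```sh
+# Branch off the release branch, following the feat/X.Y.Z.A.n-pr_number convention
+git checkout -b feat/X.Y.Z.A.n-1234 release/X.Y.Z.A.n
+
+# Back-port the commit(s) that must ship in the release
+git cherry-pick <commit-sha>
+
+# Push the branch and open a PR targeting release/X.Y.Z.A.n
+git push origin feat/X.Y.Z.A.n-1234
+```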
diff --git a/stacks-signer/CHANGELOG.md b/stacks-signer/CHANGELOG.md
index aa2b87deb7..489fd39cf7 100644
--- a/stacks-signer/CHANGELOG.md
+++ b/stacks-signer/CHANGELOG.md
@@ -11,6 +11,24 @@ and this project adheres to the versioning scheme outlined in the [README.md](RE
 ### Changed
+## [3.0.0.0.0]
+
+### Added
+
+- Improved StackerDB message structures
+- Improved mock signing during epoch 2.5
+- Include the `stacks-signer` binary version in startup logging and StackerDB messages
+- Added a `monitor-signers` CLI command for better visibility into other signers on the network
+- Support custom Chain ID in signer configuration
+- Refresh the signer's sortition view when it sees a block proposal for a new tenure
+- Fixed a race condition where a signer would try to update before StackerDB configuration was set
+
+### Changed
+
+- Migrate to new Stacks Node RPC endpoint `/v3/tenures/fork_info/:start/:stop`
+- Improved chainstate storage for handling of forks and other state
+- Updated prometheus metric labels to reduce high cardinality
+
 ## [2.5.0.0.5.3]
 
 ### Added
diff --git a/stacks-signer/release-process.md b/stacks-signer/release-process.md
index 599d8c7af4..71d47a3e26 100644
--- a/stacks-signer/release-process.md
+++ b/stacks-signer/release-process.md
@@ -11,27 +11,29 @@
 | Linux ARMv7 | _builds are provided but not tested_ |
 | Linux ARM64 | _builds are provided but not tested_ |
 
-
 ## Release Schedule and Hotfixes
 
-Normal releases in this repository that add new or updated features shall be released in an ad-hoc manner. The currently staged changes for such releases
-are in the [develop branch](https://github.com/stacks-network/stacks-core/tree/develop). It is generally safe to run a `stacks-signer` from that branch, though it has received less rigorous testing than release branches. If bugs are found in the `develop` branch, please do [report them as issues](https://github.com/stacks-network/stacks-core/issues) on this repository.
+`stacks-signer` releases that add new or updated features shall be released in an ad-hoc manner.
+It is generally safe to run a `stacks-signer` built from the [develop branch](https://github.com/stacks-network/stacks-core/tree/develop), though it has received less rigorous testing than release branches.
+If bugs are found in the `develop` branch, please do [report them as issues](https://github.com/stacks-network/stacks-core/issues) on this repository.
 
 For fixes that impact the correct functioning or liveness of the signer, _hotfixes_ may be issued.
 These hotfixes are categorized by priority according to the following rubric:
 
-- **High Priority**. Any fix for an issue that could deny service to the network as a whole, e.g., an issue where a particular kind of invalid transaction would cause nodes to stop processing requests or shut down unintentionally.
-- **Medium Priority**. ny fix for an issue that could deny service to individual nodes.
-- **Low Priority**. Any fix for an issue that is not high or medium priority.
+- **High Priority**. Any fix for an issue that could deny service to the network as a whole, e.g., an issue where a particular kind of invalid transaction would cause nodes to stop processing requests or shut down unintentionally.
+- **Medium Priority**. Any fix for an issue that could deny service to individual nodes.
+- **Low Priority**. Any fix for an issue that is not high or medium priority.
 
 ## Versioning
 
-This project uses a 6 part version number. When there is a stacks-core release, `stacks-signer` will assume the same version as the tagged `stacks-core` release (5 part version).
When there are changes in-between stacks-core releases, the signer binary will assume a 6 part version. +This project uses a 6 part version number. +When there is a stacks-core release, `stacks-signer` will assume the same version as the tagged `stacks-core` release ([5 part version](../docs/release-process.md#versioning)). +When there are changes in-between `stacks-core` releases, the `stacks-signer` binary will assume a 6 part version: ``` X.Y.Z.A.n.x -X = 2 and does not change in practice unless there’s another Stacks 2.0 type event +X major version - in practice, this does not change unless there’s another significant network update (e.g. a Stacks 3.0 type of event) Y increments on consensus-breaking changes Z increments on non-consensus-breaking changes that require a fresh chainstate (akin to semantic MAJOR) A increments on non-consensus-breaking changes that do not require a fresh chainstate, but introduce new features (akin to semantic MINOR) @@ -39,47 +41,49 @@ n increments on patches and hot-fixes (akin to semantic PATCH) x increments on the current stacks-core release version ``` -For example, if there is a stacks-core release of 2.6.0.0.0, `stacks-signer` will also be versioned as 2.6.0.0.0. If a change is needed in the signer, it may be released apart from the stacks-core as version 2.6.0.0.0.1 and will increment until the next stacks-core release. +## Non-Consensus Breaking Release Process + +The release must be timed so that it does not interfere with a _prepare phase_. +The timing of the next Stacking cycle can be found [here](https://stx.eco/dao/tools?tool=2); to avoid interfering with the prepare phase, releases should happen at least 24 hours before the start of a new cycle. + +1. Before creating the release, the _version number_ must be determined, where the factors that determine the version number are discussed in [Versioning](#versioning). + + - First determine whether there are any "non-consensus-breaking changes that require a fresh chainstate". + - In other words, the database schema has changed, but an automatic migration was not implemented. + - Determine whether this a feature release, as opposed to a hotfix or a patch. + - A new branch in the format `release/signer-X.Y.Z.A.n.x` is created from the base branch `develop`. + +2. Enumerate PRs and/or issues that would _block_ the release. + + - A label should be applied to each such issue/PR as `signer-X.Y.Z.A.n.x-blocker`. + +3. Since development is continuing in the `develop` branch, it may be necessary to cherry-pick some commits into the release branch. -## Release Process + - Create a feature branch from `release/signer-X.Y.Z.A.n.x`, ex: `feat/signer-X.Y.Z.A.n.x-pr_number`. + - Add cherry-picked commits to the `feat/signer-X.Y.Z.A.n.x-pr_number` branch + - Merge `feat/signer-X.Y.Z.A.n.x-pr_number` into `release/signer-X.Y.Z.A.n.x`. +4. Open a PR to update the [CHANGELOG](./CHANGELOG.md) file in the `release/signer-X.Y.Z.A.n.x` branch. -1. The release must be timed so that it does not interfere with a _prepare - phase_. The timing of the next Stacking cycle can be found - [here](https://stx.eco/dao/tools?tool=2). A release should happen - at least 48 hours before the start of a new cycle, to avoid interfering - with the prepare phase. + - Create a chore branch from `release/signer-X.Y.Z.A.n.x`, ex: `chore/signer-X.Y.Z.A.n.x-changelog`. + - Add summaries of all Pull Requests to the `Added`, `Changed` and `Fixed` sections. -2. 
Before creating the release, the release manager must determine the _version - number_ for this release, and create a release branch in the format: `release/signer-X.Y.Z.A.n.x`. - The factors that determine the version number are discussed in [Versioning](#versioning). + - Pull requests merged into `develop` can be found [here](https://github.com/stacks-network/stacks-core/pulls?q=is%3Apr+is%3Aclosed+base%3Adevelop+sort%3Aupdated-desc). -3. _Blocking_ PRs or issues are enumerated and a label should be applied to each - issue/PR such as `signer-X.Y.Z.A.n.x-blocker`. The Issue/PR owners for each should be pinged - for updates on whether or not those issues/PRs have any blockers or are waiting on feedback. - __Note__: It may be necessary to cherry-pick these PR's into the target branch `release/signer-X.Y.Z.A.n.x` + **Note**: GitHub does not allow sorting by _merge time_, so, when sorting by some proxy criterion, some care should be used to understand which PR's were _merged_ after the last release. -4. The [CHANGELOG.md](./CHANGELOG.md) file shall be updated with summaries of what - was `Added`, `Changed`, and `Fixed` in the base branch. For example, pull requests - merged into `develop` can be found [here](https://github.com/stacks-network/stacks-blockchain/pulls?q=is%3Apr+is%3Aclosed+base%3Adevelop+sort%3Aupdated-desc). - Note, however, that GitHub apparently does not allow sorting by _merge time_, - so, when sorting by some proxy criterion, some care should be used to understand - which PR's were _merged_ after the last release. +5. Once `chore/signer-X.Y.Z.A.n.x-changelog` has merged, a build may be started by manually triggering the [`CI` workflow](../.github/workflows/ci.yml) against the `release/signer-X.Y.Z.A.n.x` branch. -5. Once any blocker PRs have merged, a new tag will be created - by manually triggering the [`CI` Github Actions workflow](https://github.com/stacks-network/stacks-core/actions/workflows/ci.yml) - against the `release/signer-X.Y.Z.A.n.x` branch. +6. Once the release candidate has been built and binaries are available, ecosystem participants shall be notified to test the tagged release on various staging infrastructure. -6. Ecosystem participants will be notified of the release candidate in order - to test the release on various staging infrastructure. +7. If bugs or issues emerge from the rollout on staging infrastructure, the release will be delayed until those regressions are resolved. -7. If bugs or issues emerge from the rollout on staging infrastructure, the release - will be delayed until those regressions are resolved. As regressions are resolved, - additional release candidates shall be tagged. + - As regressions are resolved, additional release candidates should be tagged. + - Repeat steps 3-6 as necessary. -8. Once the final release candidate has rolled out successfully without issue on staging - infrastructure, the tagged release shall no longer marked as Pre-Release on the [Github releases](https://github.com/stacks-network/stacks-blockchain/releases/) - page. Announcements will then be shared in the `#stacks-core-devs` channel in the - Stacks Discord, as well as the [mailing list](https://groups.google.com/a/stacks.org/g/announce). +8. Once the final release candidate has rolled out successfully without issue on staging infrastructure, the tagged release shall no longer marked as Pre-Release on the [Github releases](https://github.com/stacks-network/stacks-core/releases/) page. 
+ Announcements will then be shared in the `#stacks-core-devs` channel in the Stacks Discord, as well as the [mailing list](https://groups.google.com/a/stacks.org/g/announce). -9. Finally, the release branch `release/signer-X.Y.Z.A.n.x` will be PR'ed into the `master` branch, and once merged, a PR for `master->develop` will be opened. +9. Finally, the following merges will happen to complete the release process: + - Release branch `release/signer-X.Y.Z.A.n.x` will be merged into the `master` branch. + - Then, `master` will be merged into `develop`. diff --git a/testnet/stacks-node/conf/local-follower-conf.toml b/testnet/stacks-node/conf/local-follower-conf.toml deleted file mode 100644 index 8186b57f54..0000000000 --- a/testnet/stacks-node/conf/local-follower-conf.toml +++ /dev/null @@ -1,46 +0,0 @@ -[node] -rpc_bind = "127.0.0.1:30443" -p2p_bind = "127.0.0.1:30444" -bootstrap_node = "04ee0b1602eb18fef7986887a7e8769a30c9df981d33c8380d255edef003abdcd243a0eb74afdf6740e6c423e62aec631519a24cf5b1d62bf8a3e06ddc695dcb77@127.0.0.1:20444" -pox_sync_sample_secs = 10 -wait_time_for_microblocks = 0 - -[burnchain] -chain = "bitcoin" -mode = "krypton" -peer_host = "127.0.0.1" -rpc_port = 18443 -peer_port = 18444 - -# Used for sending events to a local stacks-blockchain-api service -# [[events_observer]] -# endpoint = "localhost:3700" -# events_keys = ["*"] - -[[ustx_balance]] -# "mnemonic": "point approve language letter cargo rough similar wrap focus edge polar task olympic tobacco cinnamon drop lawn boring sort trade senior screen tiger climb", -# "privateKey": "539e35c740079b79f931036651ad01f76d8fe1496dbd840ba9e62c7e7b355db001", -# "btcAddress": "n1htkoYKuLXzPbkn9avC2DJxt7X85qVNCK", -address = "ST3EQ88S02BXXD0T5ZVT3KW947CRMQ1C6DMQY8H19" -amount = 100000000000000 - -[[ustx_balance]] -# "mnemonic": "laugh capital express view pull vehicle cluster embark service clerk roast glance lumber glove purity project layer lyrics limb junior reduce apple method pear", -# "privateKey": "075754fb099a55e351fe87c68a73951836343865cd52c78ae4c0f6f48e234f3601", -# "btcAddress": "n2ZGZ7Zau2Ca8CLHGh11YRnLw93b4ufsDR", -address = "ST3KCNDSWZSFZCC6BE4VA9AXWXC9KEB16FBTRK36T" -amount = 100000000000000 - -[[ustx_balance]] -# "mnemonic": "level garlic bean design maximum inhale daring alert case worry gift frequent floor utility crowd twenty burger place time fashion slow produce column prepare", -# "privateKey": "374b6734eaff979818c5f1367331c685459b03b1a2053310906d1408dc928a0001", -# "btcAddress": "mhY4cbHAFoXNYvXdt82yobvVuvR6PHeghf", -address = "STB2BWB0K5XZGS3FXVTG3TKS46CQVV66NAK3YVN8" -amount = 100000000000000 - -[[ustx_balance]] -# "mnemonic": "drop guess similar uphold alarm remove fossil riot leaf badge lobster ability mesh parent lawn today student olympic model assault syrup end scorpion lab", -# "privateKey": "26f235698d02803955b7418842affbee600fc308936a7ca48bf5778d1ceef9df01", -# "btcAddress": "mkEDDqbELrKYGUmUbTAyQnmBAEz4V1MAro", -address = "STSTW15D618BSZQB85R058DS46THH86YQQY6XCB7" -amount = 100000000000000 diff --git a/testnet/stacks-node/conf/local-leader-conf.toml b/testnet/stacks-node/conf/local-leader-conf.toml deleted file mode 100644 index 8e10f179d6..0000000000 --- a/testnet/stacks-node/conf/local-leader-conf.toml +++ /dev/null @@ -1,44 +0,0 @@ -[node] -rpc_bind = "127.0.0.1:20443" -p2p_bind = "127.0.0.1:20444" -seed = "0000000000000000000000000000000000000000000000000000000000000000" -local_peer_seed = "0000000000000000000000000000000000000000000000000000000000000000" -miner = true -prometheus_bind = 
"127.0.0.1:4000" -pox_sync_sample_secs = 10 -wait_time_for_microblocks = 0 - -[burnchain] -chain = "bitcoin" -mode = "krypton" -peer_host = "127.0.0.1" -rpc_port = 18443 -peer_port = 18444 - -[[ustx_balance]] -# "mnemonic": "point approve language letter cargo rough similar wrap focus edge polar task olympic tobacco cinnamon drop lawn boring sort trade senior screen tiger climb", -# "privateKey": "539e35c740079b79f931036651ad01f76d8fe1496dbd840ba9e62c7e7b355db001", -# "btcAddress": "n1htkoYKuLXzPbkn9avC2DJxt7X85qVNCK", -address = "ST3EQ88S02BXXD0T5ZVT3KW947CRMQ1C6DMQY8H19" -amount = 100000000000000 - -[[ustx_balance]] -# "mnemonic": "laugh capital express view pull vehicle cluster embark service clerk roast glance lumber glove purity project layer lyrics limb junior reduce apple method pear", -# "privateKey": "075754fb099a55e351fe87c68a73951836343865cd52c78ae4c0f6f48e234f3601", -# "btcAddress": "n2ZGZ7Zau2Ca8CLHGh11YRnLw93b4ufsDR", -address = "ST3KCNDSWZSFZCC6BE4VA9AXWXC9KEB16FBTRK36T" -amount = 100000000000000 - -[[ustx_balance]] -# "mnemonic": "level garlic bean design maximum inhale daring alert case worry gift frequent floor utility crowd twenty burger place time fashion slow produce column prepare", -# "privateKey": "374b6734eaff979818c5f1367331c685459b03b1a2053310906d1408dc928a0001", -# "btcAddress": "mhY4cbHAFoXNYvXdt82yobvVuvR6PHeghf", -address = "STB2BWB0K5XZGS3FXVTG3TKS46CQVV66NAK3YVN8" -amount = 100000000000000 - -[[ustx_balance]] -# "mnemonic": "drop guess similar uphold alarm remove fossil riot leaf badge lobster ability mesh parent lawn today student olympic model assault syrup end scorpion lab", -# "privateKey": "26f235698d02803955b7418842affbee600fc308936a7ca48bf5778d1ceef9df01", -# "btcAddress": "mkEDDqbELrKYGUmUbTAyQnmBAEz4V1MAro", -address = "STSTW15D618BSZQB85R058DS46THH86YQQY6XCB7" -amount = 100000000000000 diff --git a/testnet/stacks-node/conf/mainnet-follower-conf.toml b/testnet/stacks-node/conf/mainnet-follower-conf.toml index 6f6bab70d8..226fcae806 100644 --- a/testnet/stacks-node/conf/mainnet-follower-conf.toml +++ b/testnet/stacks-node/conf/mainnet-follower-conf.toml @@ -1,19 +1,24 @@ [node] -# working_dir = "/dir/to/save/chainstate" +# working_dir = "/dir/to/save/chainstate" # defaults to: /tmp/stacks-node-[0-9]* rpc_bind = "0.0.0.0:20443" p2p_bind = "0.0.0.0:20444" -bootstrap_node = "02196f005965cebe6ddc3901b7b1cc1aa7a88f305bb8c5893456b8f9a605923893@seed.mainnet.hiro.so:20444,02539449ad94e6e6392d8c1deb2b4e61f80ae2a18964349bc14336d8b903c46a8c@cet.stacksnodes.org:20444,02ececc8ce79b8adf813f13a0255f8ae58d4357309ba0cedd523d9f1a306fcfb79@sgt.stacksnodes.org:20444,0303144ba518fe7a0fb56a8a7d488f950307a4330f146e1e1458fc63fb33defe96@est.stacksnodes.org:20444" +prometheus_bind = "0.0.0.0:9153" [burnchain] -chain = "bitcoin" mode = "mainnet" -peer_host = "bitcoind.stacks.co" -username = "blockstack" -password = "blockstacksystem" -rpc_port = 8332 -peer_port = 8333 +peer_host = "127.0.0.1" # Used for sending events to a local stacks-blockchain-api service # [[events_observer]] # endpoint = "localhost:3700" # events_keys = ["*"] +# timeout_ms = 60_000 + +# Used if running a local stacks-signer service +# [[events_observer]] +# endpoint = "127.0.0.1:30000" +# events_keys = ["stackerdb", "block_proposal", "burn_blocks"] + +# Used if running a local stacks-signer service +# [connection_options] +# auth_token = "" # fill with a unique password diff --git a/testnet/stacks-node/conf/mainnet-miner-conf.toml b/testnet/stacks-node/conf/mainnet-miner-conf.toml index 
5b836b01c4..1ecfbc3508 100644 --- a/testnet/stacks-node/conf/mainnet-miner-conf.toml +++ b/testnet/stacks-node/conf/mainnet-miner-conf.toml @@ -1,19 +1,23 @@ [node] -# working_dir = "/dir/to/save/chainstate" -rpc_bind = "0.0.0.0:20443" -p2p_bind = "0.0.0.0:20444" +# working_dir = "/dir/to/save/chainstate" # defaults to: /tmp/stacks-node-[0-9]* +rpc_bind = "127.0.0.1:20443" +p2p_bind = "127.0.0.1:20444" +prometheus_bind = "127.0.0.1:9153" seed = "" local_peer_seed = "" miner = true -bootstrap_node = "02196f005965cebe6ddc3901b7b1cc1aa7a88f305bb8c5893456b8f9a605923893@seed.mainnet.hiro.so:20444,02539449ad94e6e6392d8c1deb2b4e61f80ae2a18964349bc14336d8b903c46a8c@cet.stacksnodes.org:20444,02ececc8ce79b8adf813f13a0255f8ae58d4357309ba0cedd523d9f1a306fcfb79@sgt.stacksnodes.org:20444,0303144ba518fe7a0fb56a8a7d488f950307a4330f146e1e1458fc63fb33defe96@est.stacksnodes.org:20444" +mine_microblocks = false # Disable microblocks (ref: https://github.com/stacks-network/stacks-core/pull/4561 ) [burnchain] -chain = "bitcoin" mode = "mainnet" peer_host = "127.0.0.1" -username = "" -password = "" -rpc_port = 8332 -peer_port = 8333 -satoshis_per_byte = 100 +username = "" +password = "" +# Maximum amount (in sats) of "burn commitment" to broadcast for the next block's leader election burn_fee_cap = 20000 +# Amount (in sats) per byte - Used to calculate the transaction fees +satoshis_per_byte = 25 +# Amount of sats to add when RBF'ing bitcoin tx (default: 5) +rbf_fee_increment = 5 +# Maximum percentage to RBF bitcoin tx (default: 150% of satsv/B) +max_rbf = 150 diff --git a/testnet/stacks-node/conf/mainnet-mockminer-conf.toml b/testnet/stacks-node/conf/mainnet-mockminer-conf.toml index aed3e9874c..9d583d218b 100644 --- a/testnet/stacks-node/conf/mainnet-mockminer-conf.toml +++ b/testnet/stacks-node/conf/mainnet-mockminer-conf.toml @@ -1,17 +1,11 @@ [node] -# working_dir = "/dir/to/save/chainstate" +# working_dir = "/dir/to/save/chainstate" # defaults to: /tmp/stacks-node-[0-9]* rpc_bind = "0.0.0.0:20443" p2p_bind = "0.0.0.0:20444" miner = true mock_mining = true -bootstrap_node = "02196f005965cebe6ddc3901b7b1cc1aa7a88f305bb8c5893456b8f9a605923893@seed.mainnet.hiro.so:20444,02539449ad94e6e6392d8c1deb2b4e61f80ae2a18964349bc14336d8b903c46a8c@cet.stacksnodes.org:20444,02ececc8ce79b8adf813f13a0255f8ae58d4357309ba0cedd523d9f1a306fcfb79@sgt.stacksnodes.org:20444,0303144ba518fe7a0fb56a8a7d488f950307a4330f146e1e1458fc63fb33defe96@est.stacksnodes.org:20444" +prometheus_bind = "0.0.0.0:9153" [burnchain] -chain = "bitcoin" mode = "mainnet" -peer_host = "bitcoind.stacks.co" -username = "blockstack" -password = "blockstacksystem" -rpc_port = 8332 -peer_port = 8333 -burn_fee_cap = 1 +peer_host = "127.0.0.1" diff --git a/testnet/stacks-node/conf/mainnet-signer.toml b/testnet/stacks-node/conf/mainnet-signer.toml new file mode 100644 index 0000000000..8683f076f2 --- /dev/null +++ b/testnet/stacks-node/conf/mainnet-signer.toml @@ -0,0 +1,22 @@ +[node] +# working_dir = "/dir/to/save/chainstate" # defaults to: /tmp/stacks-node-[0-9]* +rpc_bind = "0.0.0.0:20443" +p2p_bind = "0.0.0.0:20444" +prometheus_bind = "0.0.0.0:9153" + +[burnchain] +mode = "mainnet" +peer_host = "127.0.0.1" + +# Used for sending events to a local stacks-blockchain-api service +# [[events_observer]] +# endpoint = "localhost:3700" +# events_keys = ["*"] +# timeout_ms = 60_000 + +[[events_observer]] +endpoint = "127.0.0.1:30000" +events_keys = ["stackerdb", "block_proposal", "burn_blocks"] + +[connection_options] +auth_token = "" # fill with a unique password diff 
--git a/testnet/stacks-node/conf/mocknet-follower-conf.toml b/testnet/stacks-node/conf/mocknet-follower-conf.toml deleted file mode 100644 index e9a0e7a643..0000000000 --- a/testnet/stacks-node/conf/mocknet-follower-conf.toml +++ /dev/null @@ -1,32 +0,0 @@ -[node] -# working_dir = "/dir/to/save/chainstate" -rpc_bind = "0.0.0.0:20443" -p2p_bind = "0.0.0.0:20444" -bootstrap_node = "04ee0b1602eb18fef7986887a7e8769a30c9df981d33c8380d255edef003abdcd243a0eb74afdf6740e6c423e62aec631519a24cf5b1d62bf8a3e06ddc695dcb77@127.0.0.1:20444" -wait_time_for_microblocks = 10000 -use_test_genesis_chainstate = true - -[burnchain] -chain = "bitcoin" -mode = "mocknet" - -# Used for sending events to a local stacks-blockchain-api service -# [[events_observer]] -# endpoint = "localhost:3700" -# events_keys = ["*"] - -[[ustx_balance]] -address = "ST3EQ88S02BXXD0T5ZVT3KW947CRMQ1C6DMQY8H19" -amount = 100000000000000 - -[[ustx_balance]] -address = "ST3KCNDSWZSFZCC6BE4VA9AXWXC9KEB16FBTRK36T" -amount = 100000000000000 - -[[ustx_balance]] -address = "STB2BWB0K5XZGS3FXVTG3TKS46CQVV66NAK3YVN8" -amount = 100000000000000 - -[[ustx_balance]] -address = "STSTW15D618BSZQB85R058DS46THH86YQQY6XCB7" -amount = 100000000000000 diff --git a/testnet/stacks-node/conf/mocknet-miner-conf.toml b/testnet/stacks-node/conf/mocknet-miner-conf.toml deleted file mode 100644 index 71add782b1..0000000000 --- a/testnet/stacks-node/conf/mocknet-miner-conf.toml +++ /dev/null @@ -1,32 +0,0 @@ -[node] -# working_dir = "/dir/to/save/chainstate" -rpc_bind = "0.0.0.0:20443" -p2p_bind = "0.0.0.0:20444" -seed = "0000000000000000000000000000000000000000000000000000000000000000" -local_peer_seed = "0000000000000000000000000000000000000000000000000000000000000000" -miner = true -wait_time_for_microblocks = 10000 -use_test_genesis_chainstate = true - -[connection_options] -public_ip_address = "127.0.0.1:20444" - -[burnchain] -chain = "bitcoin" -mode = "mocknet" - -[[ustx_balance]] -address = "ST3EQ88S02BXXD0T5ZVT3KW947CRMQ1C6DMQY8H19" -amount = 100000000000000 - -[[ustx_balance]] -address = "ST3KCNDSWZSFZCC6BE4VA9AXWXC9KEB16FBTRK36T" -amount = 100000000000000 - -[[ustx_balance]] -address = "STB2BWB0K5XZGS3FXVTG3TKS46CQVV66NAK3YVN8" -amount = 100000000000000 - -[[ustx_balance]] -address = "STSTW15D618BSZQB85R058DS46THH86YQQY6XCB7" -amount = 100000000000000 diff --git a/testnet/stacks-node/conf/prometheus.yml b/testnet/stacks-node/conf/prometheus.yml deleted file mode 100644 index ad3a063ba7..0000000000 --- a/testnet/stacks-node/conf/prometheus.yml +++ /dev/null @@ -1,13 +0,0 @@ -global: - scrape_interval: 15s - evaluation_interval: 15s -scrape_configs: - - job_name: 'prometheus' - static_configs: - - targets: ['127.0.0.1:9090'] - - job_name: 'stacks-node-leader' - static_configs: - - targets: ['127.0.0.1:4000'] - - job_name: 'stacks-node-follower' - static_configs: - - targets: ['127.0.0.1:5000'] diff --git a/testnet/stacks-node/conf/regtest-follower-conf.toml b/testnet/stacks-node/conf/regtest-follower-conf.toml deleted file mode 100644 index 151446fbaf..0000000000 --- a/testnet/stacks-node/conf/regtest-follower-conf.toml +++ /dev/null @@ -1,36 +0,0 @@ -[node] -# working_dir = "/dir/to/save/chainstate" -rpc_bind = "0.0.0.0:20443" -p2p_bind = "0.0.0.0:20444" -bootstrap_node = "048dd4f26101715853533dee005f0915375854fd5be73405f679c1917a5d4d16aaaf3c4c0d7a9c132a36b8c5fe1287f07dad8c910174d789eb24bdfb5ae26f5f27@regtest.stacks.co:20444" -wait_time_for_microblocks = 10000 - -[burnchain] -chain = "bitcoin" -mode = "krypton" -peer_host = "bitcoind.regtest.stacks.co" 
-username = "blockstack" -password = "blockstacksystem" -rpc_port = 18443 -peer_port = 18444 - -# Used for sending events to a local stacks-blockchain-api service -# [[events_observer]] -# endpoint = "localhost:3700" -# events_keys = ["*"] - -[[ustx_balance]] -address = "ST2QKZ4FKHAH1NQKYKYAYZPY440FEPK7GZ1R5HBP2" -amount = 10000000000000000 - -[[ustx_balance]] -address = "ST319CF5WV77KYR1H3GT0GZ7B8Q4AQPY42ETP1VPF" -amount = 10000000000000000 - -[[ustx_balance]] -address = "ST221Z6TDTC5E0BYR2V624Q2ST6R0Q71T78WTAX6H" -amount = 10000000000000000 - -[[ustx_balance]] -address = "ST2TFVBMRPS5SSNP98DQKQ5JNB2B6NZM91C4K3P7B" -amount = 10000000000000000 diff --git a/testnet/stacks-node/conf/testnet-follower-conf.toml b/testnet/stacks-node/conf/testnet-follower-conf.toml index 5fe717bfb1..80226c5b89 100644 --- a/testnet/stacks-node/conf/testnet-follower-conf.toml +++ b/testnet/stacks-node/conf/testnet-follower-conf.toml @@ -1,23 +1,31 @@ [node] -# working_dir = "/dir/to/save/chainstate" +# working_dir = "/dir/to/save/chainstate" # defaults to: /tmp/stacks-node-[0-9]* rpc_bind = "0.0.0.0:20443" p2p_bind = "0.0.0.0:20444" bootstrap_node = "029266faff4c8e0ca4f934f34996a96af481df94a89b0c9bd515f3536a95682ddc@seed.testnet.hiro.so:30444" -wait_time_for_microblocks = 10000 +prometheus_bind = "0.0.0.0:9153" [burnchain] -chain = "bitcoin" -mode = "xenon" -peer_host = "bitcoind.testnet.stacks.co" -username = "blockstack" -password = "blockstacksystem" -rpc_port = 18332 -peer_port = 18333 +mode = "krypton" +peer_host = "bitcoin.regtest.hiro.so" +peer_port = 18444 +pox_prepare_length = 100 +pox_reward_length = 900 # Used for sending events to a local stacks-blockchain-api service # [[events_observer]] # endpoint = "localhost:3700" # events_keys = ["*"] +# timeout_ms = 60_000 + +# Used if running a local stacks-signer service +# [[events_observer]] +# endpoint = "127.0.0.1:30000" +# events_keys = ["stackerdb", "block_proposal", "burn_blocks"] + +# Used if running a local stacks-signer service +# [connection_options] +# auth_token = "" # fill with a unique password [[ustx_balance]] address = "ST2QKZ4FKHAH1NQKYKYAYZPY440FEPK7GZ1R5HBP2" @@ -34,3 +42,39 @@ amount = 10000000000000000 [[ustx_balance]] address = "ST2TFVBMRPS5SSNP98DQKQ5JNB2B6NZM91C4K3P7B" amount = 10000000000000000 + +[[burnchain.epochs]] +epoch_name = "1.0" +start_height = 0 + +[[burnchain.epochs]] +epoch_name = "2.0" +start_height = 0 + +[[burnchain.epochs]] +epoch_name = "2.05" +start_height = 1 + +[[burnchain.epochs]] +epoch_name = "2.1" +start_height = 2 + +[[burnchain.epochs]] +epoch_name = "2.2" +start_height = 3 + +[[burnchain.epochs]] +epoch_name = "2.3" +start_height = 4 + +[[burnchain.epochs]] +epoch_name = "2.4" +start_height = 5 + +[[burnchain.epochs]] +epoch_name = "2.5" +start_height = 6 + +[[burnchain.epochs]] +epoch_name = "3.0" +start_height = 56_457 diff --git a/testnet/stacks-node/conf/testnet-miner-conf.toml b/testnet/stacks-node/conf/testnet-miner-conf.toml index ca52b33a23..93455dcee5 100644 --- a/testnet/stacks-node/conf/testnet-miner-conf.toml +++ b/testnet/stacks-node/conf/testnet-miner-conf.toml @@ -1,21 +1,27 @@ [node] -# working_dir = "/dir/to/save/chainstate" +# working_dir = "/dir/to/save/chainstate" # defaults to: /tmp/stacks-node-[0-9]* rpc_bind = "0.0.0.0:20443" p2p_bind = "0.0.0.0:20444" -seed = "" -local_peer_seed = "" -miner = true bootstrap_node = "029266faff4c8e0ca4f934f34996a96af481df94a89b0c9bd515f3536a95682ddc@seed.testnet.hiro.so:30444" -wait_time_for_microblocks = 10000 +prometheus_bind = "0.0.0.0:9153" 
[burnchain] -chain = "bitcoin" -mode = "xenon" +mode = "krypton" peer_host = "127.0.0.1" -username = "" -password = "" -rpc_port = 18332 -peer_port = 18333 +username = "" +password = "" +rpc_port = 12345 # Bitcoin RPC port +peer_port = 6789 # Bitcoin P2P port +pox_prepare_length = 100 +pox_reward_length = 900 +# Maximum amount (in sats) of "burn commitment" to broadcast for the next block's leader election +burn_fee_cap = 20000 +# Amount (in sats) per byte - Used to calculate the transaction fees +satoshis_per_byte = 25 +# Amount of sats to add when RBF'ing bitcoin tx (default: 5) +rbf_fee_increment = 5 +# Maximum percentage to RBF bitcoin tx (default: 150% of satsv/B) +max_rbf = 150 [[ustx_balance]] address = "ST2QKZ4FKHAH1NQKYKYAYZPY440FEPK7GZ1R5HBP2" @@ -32,3 +38,39 @@ amount = 10000000000000000 [[ustx_balance]] address = "ST2TFVBMRPS5SSNP98DQKQ5JNB2B6NZM91C4K3P7B" amount = 10000000000000000 + +[[burnchain.epochs]] +epoch_name = "1.0" +start_height = 0 + +[[burnchain.epochs]] +epoch_name = "2.0" +start_height = 0 + +[[burnchain.epochs]] +epoch_name = "2.05" +start_height = 1 + +[[burnchain.epochs]] +epoch_name = "2.1" +start_height = 2 + +[[burnchain.epochs]] +epoch_name = "2.2" +start_height = 3 + +[[burnchain.epochs]] +epoch_name = "2.3" +start_height = 4 + +[[burnchain.epochs]] +epoch_name = "2.4" +start_height = 5 + +[[burnchain.epochs]] +epoch_name = "2.5" +start_height = 6 + +[[burnchain.epochs]] +epoch_name = "3.0" +start_height = 56_457 diff --git a/testnet/stacks-node/conf/testnet-signer.toml b/testnet/stacks-node/conf/testnet-signer.toml new file mode 100644 index 0000000000..f4a9bc3b71 --- /dev/null +++ b/testnet/stacks-node/conf/testnet-signer.toml @@ -0,0 +1,78 @@ +[node] +# working_dir = "/dir/to/save/chainstate" # defaults to: /tmp/stacks-node-[0-9]* +rpc_bind = "0.0.0.0:20443" +p2p_bind = "0.0.0.0:20444" +bootstrap_node = "029266faff4c8e0ca4f934f34996a96af481df94a89b0c9bd515f3536a95682ddc@seed.testnet.hiro.so:30444" +prometheus_bind = "0.0.0.0:9153" + +[burnchain] +mode = "krypton" +peer_host = "bitcoin.regtest.hiro.so" +peer_port = 18444 +pox_prepare_length = 100 +pox_reward_length = 900 + +# Used for sending events to a local stacks-blockchain-api service +# [[events_observer]] +# endpoint = "localhost:3700" +# events_keys = ["*"] +# timeout_ms = 60_000 + +[[events_observer]] +endpoint = "127.0.0.1:30000" +events_keys = ["stackerdb", "block_proposal", "burn_blocks"] + +[connection_options] +auth_token = "" # fill with a unique password + +[[ustx_balance]] +address = "ST2QKZ4FKHAH1NQKYKYAYZPY440FEPK7GZ1R5HBP2" +amount = 10000000000000000 + +[[ustx_balance]] +address = "ST319CF5WV77KYR1H3GT0GZ7B8Q4AQPY42ETP1VPF" +amount = 10000000000000000 + +[[ustx_balance]] +address = "ST221Z6TDTC5E0BYR2V624Q2ST6R0Q71T78WTAX6H" +amount = 10000000000000000 + +[[ustx_balance]] +address = "ST2TFVBMRPS5SSNP98DQKQ5JNB2B6NZM91C4K3P7B" +amount = 10000000000000000 + +[[burnchain.epochs]] +epoch_name = "1.0" +start_height = 0 + +[[burnchain.epochs]] +epoch_name = "2.0" +start_height = 0 + +[[burnchain.epochs]] +epoch_name = "2.05" +start_height = 1 + +[[burnchain.epochs]] +epoch_name = "2.1" +start_height = 2 + +[[burnchain.epochs]] +epoch_name = "2.2" +start_height = 3 + +[[burnchain.epochs]] +epoch_name = "2.3" +start_height = 4 + +[[burnchain.epochs]] +epoch_name = "2.4" +start_height = 5 + +[[burnchain.epochs]] +epoch_name = "2.5" +start_height = 6 + +[[burnchain.epochs]] +epoch_name = "3.0" +start_height = 56_457 diff --git a/testnet/stacks-node/src/config.rs 
b/testnet/stacks-node/src/config.rs index 0658862246..0beed9471d 100644 --- a/testnet/stacks-node/src/config.rs +++ b/testnet/stacks-node/src/config.rs @@ -3033,8 +3033,9 @@ mod tests { if path.is_file() { let file_name = path.file_name().unwrap().to_str().unwrap(); if file_name.ends_with(".toml") { + debug!("Parsing config file: {file_name}"); let _config = ConfigFile::from_path(path.to_str().unwrap()).unwrap(); - debug!("Parsed config file: {}", file_name); + debug!("Parsed config file: {file_name}"); } } }
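Following the 6-part scheme in the signer release process above, a signer-only change shipped on top of a `stacks-core` 3.0.0.0.0 release would be versioned `3.0.0.0.0.1`, with the final part incrementing until the next `stacks-core` release. A minimal sketch of the cherry-pick flow from steps 3 and 4 of that process is below; the version, PR number, and commit SHA are hypothetical placeholders, and only the branch-naming convention comes from the document itself.

```sh
# Sketch of steps 3-4 of the signer release process (hypothetical version,
# PR number, and commit SHA; branch names follow the documented convention).
git fetch origin

# Feature branch off the release branch, used to back-port a PR from develop
git checkout -b feat/signer-3.0.0.0.0.1-1234 origin/release/signer-3.0.0.0.0.1
git cherry-pick <commit-sha-from-develop>
git push -u origin feat/signer-3.0.0.0.0.1-1234
# ...then open a PR from feat/signer-3.0.0.0.0.1-1234 into release/signer-3.0.0.0.0.1

# Chore branch carrying the CHANGELOG update for the release
git checkout -b chore/signer-3.0.0.0.0.1-changelog origin/release/signer-3.0.0.0.0.1
```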
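To exercise the node configuration files in this diff, a minimal sketch is shown below. It assumes a locally built `stacks-node` on `PATH`, `jq` installed, that the command is run from the repository root, and that the Prometheus listener serves the conventional `/metrics` path; the ports are taken from the testnet follower config above.

```sh
# Start a follower with the testnet config shown above (krypton mode,
# bitcoin.regtest.hiro.so as the burnchain peer).
stacks-node start --config=./testnet/stacks-node/conf/testnet-follower-conf.toml

# In another shell: the RPC interface binds to 0.0.0.0:20443 in that config,
# and Prometheus metrics are exposed on 0.0.0.0:9153.
curl -s http://127.0.0.1:20443/v2/info | jq .
curl -s http://127.0.0.1:9153/metrics | head
```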