Add readme, contribution guidelines and local environment setup guide (foundry-rs#193)

Closes

---------

Co-authored-by: Maksymilian Demitraszek <[email protected]>
Co-authored-by: Tomasz Rejowski <[email protected]>
3 people authored Jul 18, 2023
1 parent 81bd6dd commit a2669b7
Showing 22 changed files with 1,365 additions and 29 deletions.
62 changes: 62 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,62 @@
# Contribution Guidelines

Starknet Foundry is under active development and is open for contributions!
Want to get started?
Grab any [issue](https://github.com/foundry-rs/starknet-foundry/issues) labeled with `good-first-issue`!
Need some guidance?

Reach out to other developers on [Telegram](https://t.me/+d8ULaPxeRqlhMDNk) or open
a [GitHub discussion](https://github.com/foundry-rs/starknet-foundry/discussions)!

### Environment setup

See [development guide](https://foundry-rs.github.io/starknet-foundry/development/environment-setup.html) in Starknet
Foundry book for environment setup.

### Running Tests and Checks

To run the test scripts, you have to install:

- [asdf](https://asdf-vm.com/guide/getting-started.html)
- [starknet-devnet](https://0xspaceshard.github.io/starknet-devnet/docs/intro)

> ⚠️ Make sure you run `./scripts/prepare-for-tests.sh` after setting up the development environment, otherwise the
> tests will fail.

Before creating a contribution, make sure your code passes the following checks:

```shell
./scripts/test_forge.sh
./scripts/test_cast.sh
cargo fmt --check
cargo lint
```
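If you want these checks to run automatically before every push, they can be wired into a git `pre-push` hook. Here is a minimal sketch (the hook installation below is illustrative and not part of the repository):

```shell
# Install a pre-push hook that runs the same checks as above (illustrative sketch).
HOOKS_DIR="${HOOKS_DIR:-.git/hooks}"
mkdir -p "$HOOKS_DIR"
cat > "$HOOKS_DIR/pre-push" <<'EOF'
#!/bin/sh
# Abort the push as soon as any check fails.
set -e
./scripts/test_forge.sh
./scripts/test_cast.sh
cargo fmt --check
cargo lint
EOF
chmod +x "$HOOKS_DIR/pre-push"
```

With the hook in place, `git push` will refuse to proceed until all four checks pass.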

Otherwise, it won't be possible to merge your contribution.

## Contributing

Before you open a pull request, it is always a good idea to search
the [issues](https://github.com/foundry-rs/starknet-foundry/issues) and verify that the feature you would like
to add has not already been discussed.
We also appreciate it if you create a feature request before making a contribution, so it can be discussed before
you get to work.

### Writing Tests

Please make sure the feature you are implementing is thoroughly tested with automatic tests.
You can check existing tests in the repository to see the recommended approach to testing.

### Breaking Changes

If your change modifies or breaks the behavior of any existing features, make sure to include that information in
the pull request description.

### Pull Request Size

Try to make your pull request self-contained, only introducing the necessary changes.
If your feature is complicated,
consider splitting the changes into meaningful parts and introducing them as separate pull requests.

While a large pull request will usually not be prevented from being merged, it may significantly increase review
time and the risk of hard-to-resolve merge conflicts.
99 changes: 70 additions & 29 deletions README.md
@@ -1,29 +1,70 @@
# Starknet-Foundry 🔨

Blazingly fast implementation of Foundry for developing Starknet contracts, designed & developed by the ex-[Protostar](https://github.com/software-mansion/protostar) team from [Software Mansion](https://swmansion.com)

## Development

### Environment setup

1. Install the latest [Rust](https://www.rust-lang.org/tools/install) version.
   If you already have Rust installed, make sure to upgrade it by running
   ```shell
   $ rustup update
   ```
2. Clone this repository
3. Verify your setup by running [tests](#testing)
4. Build Starknet Foundry
   ```shell
   $ cd ./starknet-foundry && cargo build --bins --release
   ```

### Testing
Test scripts require you to have [asdf](https://asdf-vm.com/guide/getting-started.html) installed.
Cast's tests require [starknet-devnet](https://0xspaceshard.github.io/starknet-devnet/docs/intro) as well.
Moreover, `./scripts/prepare-for-tests.sh` should be run once after setting up the development environment.

```bash
$ ./scripts/test_forge.sh
$ ./scripts/test_cast.sh
```
# Starknet Foundry 🔨

Blazingly fast toolkit for developing Starknet contracts designed & developed by
ex [Protostar](https://github.com/software-mansion/protostar) team from [Software Mansion](https://swmansion.com) based
on native [Cairo](https://github.com/starkware-libs/cairo) test runner
and [Blockifier](https://github.com/starkware-libs/blockifier), written in Rust 🦀.

Need help getting started with Starknet Foundry? Read the
📖 [Starknet Foundry Book](https://foundry-rs.github.io/starknet-foundry/)!

![Example run](./docs/images/demo-gif/demo.gif)

Starknet Foundry, like its [Ethereum counterpart](https://github.com/foundry-rs/foundry), consists of different modules

- [Forge](https://github.com/foundry-rs/starknet-foundry/tree/master/starknet-foundry/crates/forge): Starknet testing
framework (like Truffle, Hardhat and DappTools but for Starknet).
- [Cast](https://github.com/foundry-rs/starknet-foundry/tree/master/starknet-foundry/crates/cast): All-in-one tool for
interacting with Starknet smart contracts, sending transactions and getting chain data.

## Features

- Fast testing framework `Forge` written in Rust
- High-quality dependency management using [scarb](https://github.com/software-mansion/scarb)
- Intuitive interactions and deployment of Starknet contracts through `Cast`

## Roadmap

Starknet Foundry is under active development! Expect a lot of new features to appear soon! 🔥

- [x] Running tests written in Cairo
- [x] Contract interactions testing
- [x] Interacting with Starknet from command line
- [x] Multicall support
- [ ] Cheatcodes
- [ ] Parallel tests execution
- [ ] Performance improvements
- [ ] Deployment scripts written in Cairo
- [ ] Starknet state forking
- [ ] Advanced debugging utilities
- [ ] L1 ↔ L2 messaging and cross-chain testing
- [ ] Transactions profiling
- [ ] Fuzz testing
- [ ] Test coverage reports

## Performance

Forge achieves performance comparable to the Cairo Test Runner with improved user experience. All of that is possible on just a single thread, and multithreading is well on its way!

![Starknet test framework speed comparison](./benchmarks/plot.png)

To learn more about our benchmark methodology, check the [benchmarks directory](./benchmarks/).

## Getting Help

Haven't found the answer to your question in
the [Starknet Foundry Book](https://foundry-rs.github.io/starknet-foundry/)?

- Join the [Telegram](https://t.me/+d8ULaPxeRqlhMDNk) group to get help
- Open a [GitHub discussion](https://github.com/foundry-rs/starknet-foundry/discussions) with your question
- Join the [Starknet Discord](https://discord.com/invite/qypnmzkhbc)

Found a bug? Open an [issue](https://github.com/foundry-rs/starknet-foundry/issues).

## Contributions

Starknet Foundry is under active development, and we appreciate any help from the community! Want to contribute? Read
the [contribution guidelines](./CONTRIBUTING.md).

Check out the [development guide](https://foundry-rs.github.io/starknet-foundry/development/environment-setup.html) for
local environment setup.
1 change: 1 addition & 0 deletions benchmarks/.gitignore
@@ -0,0 +1 @@
*.csv
14 changes: 14 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,14 @@
## Benchmarks

To run the benchmarks, use:

```shell
python3 benchmark.py
```

(requires [Protostar](https://docs.swmansion.com/protostar/), [Scarb](https://docs.swmansion.com/scarb) and [Forge](../) installed)

To plot the data afterwards, run:

```shell
python3 plot.py
```
99 changes: 99 additions & 0 deletions benchmarks/benchmark.py
@@ -0,0 +1,99 @@
from time import perf_counter
import shutil
import pandas as pd
import tempfile
from pathlib import Path
from contextlib import contextmanager
from distutils.dir_util import copy_tree
import subprocess

TOOLCHAINS = [
    # name, command, cairo version
    ("protostar", ["protostar", "test"], 1),
    ("forge", ["forge"], 2),
    ("cairo_test", ["scarb", "cairo-test"], 2),
]

# (unit, integration)
TESTS = [(x, x) for x in range(1, 8)]
CASES_PER_UNIT_TEST = 25
CASES_PER_INTEGRATION_TEST = 15


def log(x):
    print(f"[BENCHMARK] {x}")


@contextmanager
def benchmark_dir(name: str, cairo_version: int, unit: int, integration: int) -> Path:
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        data = Path(__file__).parent / "data"

        copy_tree(str(data / "project"), str(tmp))

        src = tmp / "src"
        tests = tmp / "tests" if name != "cairo_test" else src / "tests"

        for i in range(unit):
            shutil.copy(
                data / "unit_test_template.cairo", tests / f"unit{i}_test.cairo"
            )

        for i in range(integration):
            shutil.copy(
                data / f"{name}.cairo",
                tests / f"integration{i}_test.cairo",
            )

        shutil.copy(data / f"hello_starknet_{cairo_version}.cairo", src / "lib.cairo")

        if name == "cairo_test":
            with open(src / "tests.cairo", "w") as f:
                for i in range(unit):
                    f.write(f"mod unit{i}_test;\n")
                for i in range(integration):
                    f.write(f"mod integration{i}_test;\n")
            with open(src / "lib.cairo", "a") as f:
                f.write("\n")
                f.write("mod tests;\n")

        try:
            log("Creating test directory")
            yield tmp
        finally:
            pass


def benchmark():
    data = {
        "n_files": [],
        "n_unit": [],
        "n_integration": [],
    } | {name: [] for name, _, _ in TOOLCHAINS}
    for unit, integration in TESTS:
        data["n_files"].append(unit + integration)
        data["n_unit"].append(unit * CASES_PER_UNIT_TEST)
        data["n_integration"].append(integration * CASES_PER_INTEGRATION_TEST)
        for name, cmd, ver in TOOLCHAINS:
            with benchmark_dir(name, ver, unit, integration) as project_path:
                log(f"Running {name}")
                start = perf_counter()

                subprocess.run(
                    cmd,
                    stderr=subprocess.DEVNULL,
                    stdout=subprocess.PIPE,
                    check=False,
                    cwd=project_path,
                )

                data[name].append(perf_counter() - start)

    df = pd.DataFrame(data)
    df.to_csv("benchmarks.csv")
    print("", df, "", sep="\n")


if __name__ == "__main__":
    benchmark()
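For a quick look at the resulting numbers without re-plotting, the CSV written by `benchmark.py` can be summarized with a small helper. This is a hypothetical convenience function, not part of the repository; the column names match the `data` dict built above:

```python
import csv
from io import StringIO


def summarize(csv_text, baseline="cairo_test", tool="forge"):
    """Return (n_files, speedup) pairs, where speedup = baseline time / tool time."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    return [
        (int(row["n_files"]), float(row[baseline]) / float(row[tool]))
        for row in rows
    ]


# Sample row with made-up timings, in the same shape as benchmarks.csv.
sample = "n_files,n_unit,n_integration,protostar,forge,cairo_test\n2,25,15,30.0,2.0,2.5\n"
print(summarize(sample))  # → [(2, 1.25)]
```

Swapping the `baseline` and `tool` arguments lets you compare any pair of toolchains recorded in the CSV.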
