Hi! I'm so glad that you want to help with the Datastorm project, I really am!
All contributions and work done in good faith are welcome. Please check Datastorm's Code of Conduct before contributing. Violations of the Code of Conduct should be reported to the project's issue tracker.
You can contribute to Datastorm by:
- Looking at and commenting on open issues. Feedback is always welcome!
- Helping with code review of pull requests (PRs).
- Fixing or improving the docs.
- Reporting bugs
- Giving the project a star if you feel like doing so :)
First, fork Datastorm on GitHub.
This project uses the Poetry dependency management tool, so you must have it installed.
Then, run `make development`. This will:
- Install `datastorm` and its dependencies
- Install `pre-commit` hooks
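For reference, the initial setup looks roughly like this (the clone URL is a placeholder for your own fork):

```bash
# Clone your fork (replace <your-username> with your GitHub account)
git clone https://github.com/<your-username>/datastorm.git
cd datastorm

# Install datastorm, its dependencies, and the pre-commit hooks
make development
```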
To be able to run the tests, you should install either Docker or the Datastore emulator. Docker is preferred, as the `Makefile` has targets to run the tests inside a Docker container, but feel free to use the emulator instead.
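If you opt for the emulator, the Google Cloud SDK ships a local Datastore emulator. A minimal sketch of starting it, assuming `gcloud` is installed and that the test targets pick up the emulator's environment variables:

```bash
# Start a local Datastore emulator via the Google Cloud SDK
gcloud beta emulators datastore start

# In another shell, point Datastore clients at the running emulator
$(gcloud beta emulators datastore env-init)
```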
This repository follows a modified GitHub workflow.
Differences:
- Branches will follow the naming `DS-<issue id>-short-descriptive-name`. Example: `DS-11-pytests`.
- Releases will happen when a `0.0.0a0`-style version tag is pushed. This tag will be created manually by the project administrator. Note: the `0.0.0a0` format will be active until it is decided to abandon the alpha state; then it will be `0.0.0`.
Atomic commits are preferred over big ones, and please write meaningful commit messages. Commit messages should start with `DS-<issue id>:`.
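For example, for a hypothetical issue DS-42 about connection timeouts (the issue number and description here are made up), the branch and commit could look like:

```bash
# Branch named after the (hypothetical) issue DS-42
git checkout -b DS-42-fix-connection-timeout

# Commit message prefixed with the issue id
git commit -m "DS-42: Retry Datastore requests on connection timeout"
```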
A pull request should:
- Include unit tests to cover your new code. Ideally, test all possible branches.
- Include integration tests.
- Include E2E tests, if deemed necessary.
- Cover the issue.
- Pass code review.
You can run all tests with `make docker-tests`, which spins up a Docker-based Datastore emulator, or with `make tests`, which uses a local Datastore emulator.
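That is, from the repository root:

```bash
# Preferred: run the whole suite inside a Docker-based Datastore emulator
make docker-tests

# Alternative: run against a locally installed Datastore emulator
make tests
```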
Docs are located in the `docs` directory and are written in Markdown using MkDocs. You can build and take a look at the docs with `make docs`.
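For example (`mkdocs serve` is MkDocs' own live-reload server; this assumes mkdocs is available in the project's Poetry environment):

```bash
# Build and preview the docs via the project's Makefile target
make docs

# Optionally, serve the docs with live reload while editing
poetry run mkdocs serve
```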
Docs are hosted on Read the Docs and are built and pushed automatically.