Prepare release v0.1.0 (#6)
* add changelog

* update diagram

* add dev notes to README

* add pypi deploy cicd workflow

* ensure tests run on main
leifdenby authored May 22, 2024
1 parent 157f295 commit e1cf669
Showing 5 changed files with 69 additions and 5 deletions.
24 changes: 24 additions & 0 deletions .github/workflows/ci-pypi-deploy.yml
@@ -0,0 +1,24 @@
name: package-release

on:
  workflow_dispatch:
  pull_request:
  push:
    branches:
      - main
  release:
    types:
      - published

jobs:
  build:
    name: build and upload release to pypi
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - uses: casperdcl/deploy-pypi@v2
        with:
          password: ${{ secrets.PYPI_TOKEN }}
          pip: wheel -w dist/ --no-deps .
          upload: ${{ github.event_name == 'release' && github.event.action == 'published' }}
4 changes: 2 additions & 2 deletions .github/workflows/python-package-pip.yml
@@ -3,10 +3,10 @@ name: pytest
 on:
   push:
     branches:
-      - master
+      - main
   pull_request:
     branches:
-      - master
+      - main

jobs:
test:
16 changes: 16 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,16 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [v0.1.0](https://github.com/mllam/mllam-data-prep/releases/tag/v0.1.0)

First tagged release of `mllam-data-prep`, which includes functionality to
declaratively (in a yaml-config file) describe how the variables and
coordinates of a set of zarr-based source datasets are mapped to a new set of
variables with new coordinates to form a single training dataset, and to write
this resulting dataset to a new zarr dataset. This explicit mapping gives the
flexibility to target different model architectures (which may require
different inputs with different shapes).
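The declarative mapping described above can be pictured with a short sketch. The field names below are illustrative assumptions only, not the actual schema; the real specification lives in `mllam_data_prep/config/spec` in the repository.

```yaml
# hypothetical config sketch -- the keys here are assumptions, not the real schema
inputs:
  danra_surface:
    path: /path/to/danra/single_levels.zarr   # hypothetical source path
    variables: [t2m, u10m]                    # hypothetical variable names
    target_output_variable: state
output:
  variables:
    state: [time, grid_index, state_feature]
```

Each source dataset's variables and coordinates are mapped onto a named output variable, and the combined result is written to a single zarr dataset.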
30 changes: 27 additions & 3 deletions README.md
Original file line number Diff line number Diff line change
@@ -12,14 +12,38 @@ The full configuration file specification is given in [mllam_data_prep/config/spec

## Installation

-The easiest way to install the package is to clone the repository and install it using pip:
+To simply use `mllam-data-prep` you can install the most recent tagged version from pypi with pip:

 ```bash
-git clone https://github.com/mllam/mllam-data-prep
+python -m pip install mllam-data-prep
 ```

## Developing `mllam-data-prep`

To work on developing `mllam-data-prep` it is easiest to install and manage the dependencies with [pdm](https://pdm.fming.dev/). To get started, clone your fork of [the main repo](https://github.com/mllam/mllam-data-prep) locally:

```bash
git clone https://github.com/<your-github-username>/mllam-data-prep
cd mllam-data-prep
-pip install .
```

Use pdm to create and use a virtualenv:

```bash
pdm venv create
pdm use --venv in-project
pdm install
```

All linting is handled by `pre-commit`, which can be set up to run automatically on each `git commit` by installing the git commit hook:

```bash
pdm run pre-commit install
```
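Once the hook is installed, the same checks can also be run manually over the entire tree rather than only staged files (a usage sketch, assuming the pdm environment from the steps above):

```bash
# run every configured pre-commit hook against all files in the repository
pdm run pre-commit run --all-files
```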

Then branch, commit, push and make a pull-request :)


## Usage

The package is designed to be used as a command-line tool. The main command is `mllam-data-prep` which takes a configuration file as input and outputs a training dataset in the form of a `.zarr` dataset named from the config file (e.g. `example.danra.yaml` produces `example.danra.zarr`).
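A minimal invocation following the description above might look like this (a sketch; it assumes the package is installed and that the `example.danra.yaml` config from the repository is present in the working directory):

```bash
# produce a training dataset from the declarative config;
# the output zarr store is named after the config file
mllam-data-prep example.danra.yaml

# the result is a regular zarr store, e.g. inspectable with xarray:
python -c "import xarray as xr; print(xr.open_zarr('example.danra.zarr'))"
```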
Binary file modified docs/processing_diagram.png
