Merge pull request #18 from bionumpy/dev
Dev
ivargr authored May 31, 2024
2 parents 5242e55 + 321b3be commit 2ded689
Showing 9 changed files with 347 additions and 6 deletions.
66 changes: 66 additions & 0 deletions .github/workflows/asv_benchmark.yml
@@ -0,0 +1,66 @@
name: ASV Benchmarks

on:
  push:
    branches: [ "dev" ]
  pull_request:
    branches: [ "dev" ]

permissions:
  contents: write

jobs:
  build:
    permissions:
      contents: write
    runs-on: ubuntu-latest
    steps:
      - name: Set up Python
        uses: actions/setup-python@v3
        with:
          python-version: "3.10"
      - name: Run asv
        uses: actions/checkout@v3
      - run: |
          git config --global user.email "[email protected]"
          git config --global user.name "github action"
          git config pull.rebase false
          # merge in gh-pages to get the old results before generating new ones
          git fetch
          git merge origin/gh-pages --strategy-option ours --allow-unrelated-histories
          python -m pip install --upgrade pip
          pip install -r requirements_dev.txt
          pip install .
      - run: pip install asv
      - run: asv machine --machine github_actions --yes # give this machine a fixed name so that all results are linked to the same machine; otherwise the hostname is used, and it changes between runs
      - run: asv run --verbose --machine github_actions # use the machine name so that asv does not fall back to the hostname
      - run: asv publish --verbose

      - name: Save benchmark results
        run: |
          #git checkout --orphan benchmarks
          #git fetch
          # merge in gh-pages to get the old results before generating new ones
          #git merge origin/gh-pages --strategy-option ours --allow-unrelated-histories
          mkdir -p docs/asv
          cp -R .asv/html/* docs/asv/
          git add -f .asv/results/*
          git add -f docs/asv/*
          git commit -am 'Benchmark results'
          git checkout gh-pages
          # merge dev into gh-pages, keeping the asv changes
          git merge --strategy-option theirs dev --allow-unrelated-histories
          #git pull -s recursive -X ours origin gh-pages
      - name: Push results
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          tags: true
          force: true
          branch: gh-pages

10 changes: 8 additions & 2 deletions .github/workflows/build_sphinx_website.yml
@@ -29,12 +29,18 @@ jobs:
      - name: make html & commit the changes
        run: |
          sphinx-build -b html ./docs_source ./docs
-         git config user.name "knutrand"
+         git config user.name "github actions"
+         git config --global user.email "[email protected]"
+         git config pull.rebase false
          git add -f ./docs
          git commit -m 'update docs'
+         git pull -s recursive -X ours origin gh-pages --allow-unrelated-histories
      - name: push changes to gh-pages to show automatically
        uses: ad-m/github-push-action@master
        with:
          branch: gh-pages
-         force: true
+         #force: true
2 changes: 1 addition & 1 deletion .github/workflows/bumpversion.yml
@@ -2,7 +2,7 @@ name: Bump version
on:
  workflow_run:
    workflows: [ "Build and test" ]
-   branches: [ master ]
+   branches: [ main ]
    types:
      - completed
jobs:
2 changes: 1 addition & 1 deletion .github/workflows/push-to-pypi.yml
@@ -4,7 +4,7 @@ on:
  # only on successful bumpversion
  workflow_run:
    workflows: [ "Bump version" ]
-   branches: [ master ]
+   branches: [ main ]
    types:
      - completed

4 changes: 2 additions & 2 deletions .github/workflows/python-install-and-test.yml
@@ -5,9 +5,9 @@ name: Build and test

on:
  push:
-   branches: [ "master", "dev" ]
+   branches: [ "main", "dev" ]
  pull_request:
-   branches: [ "master", "dev" ]
+   branches: [ "main", "dev" ]

permissions:
  contents: read
3 changes: 3 additions & 0 deletions README.rst
@@ -32,6 +32,7 @@ As such, familiarity with `numpy` is assumed. The simplest way to construct a `R
>>> ra = RaggedArray([[1, 2], [4, 1, 3, 7], [9], [8, 7, 3, 4]])

A `RaggedArray` can be indexed much like a `numpy` array::

>>> ra[1]
array([4, 1, 3, 7])
>>> ra[1, 3]
@@ -51,6 +52,7 @@ A `RaggedArray` can be indexed much like a `numpy` array::
RaggedArray([[2, 2], [10, 10, 10, 10], [3], [5, 5, 5, 5]])

`numpy ufuncs` can be applied to `RaggedArray` objects::

>>> ra + 1
RaggedArray([[2, 3], [5, 2, 4, 8], [10], [9, 8, 4, 5]])
>>> ra*2
@@ -61,6 +63,7 @@ A `RaggedArray` can be indexed much like a `numpy` array::
RaggedArray([[-1, -2], [-4, -1, -3, -7], [-9], [-8, -7, -3, -4]])

Some `numpy` functions can be applied to `RaggedArray` objects::

>>> import numpy as np
>>> ra = RaggedArray([[1, 2], [4, 1, 3, 7], [9], [8, 7, 3, 4]])
>>> np.concatenate((ra, ra*10))
198 changes: 198 additions & 0 deletions asv.conf.json
@@ -0,0 +1,198 @@
{
    // The version of the config file format. Do not change, unless
    // you know what you are doing.
    "version": 1,

    // The name of the project being benchmarked
    "project": "project",

    // The project's homepage
    "project_url": "https://github.com/npstructures",

    // The URL or local path of the source code repository for the
    // project being benchmarked
    "repo": "https://github.com/bionumpy/npstructures.git",

    // The Python project's subdirectory in your repo. If missing or
    // the empty string, the project is assumed to be located at the root
    // of the repository.
    "repo_subdir": ".",

    // Customizable commands for building the project.
    // See asv.conf.json documentation.
    // To build the package using pyproject.toml (PEP518), uncomment the following lines
    // "build_command": [
    //     "python -m pip install build",
    //     "python -m build",
    //     "python -mpip wheel -w {build_cache_dir} {build_dir}"
    // ],
    // To build the package using setuptools and a setup.py file, uncomment the following lines
    // "build_command": [
    //     "python setup.py build",
    //     "python -mpip wheel -w {build_cache_dir} {build_dir}"
    // ],
    "build_command": [
        "python -m pip install build",
        "python -m build --wheel -o {build_cache_dir} {build_dir}",
    ],

    // Customizable commands for installing and uninstalling the project.
    // See asv.conf.json documentation.
    // "install_command": ["in-dir={env_dir} python -mpip install {wheel_file}"],
    // "uninstall_command": ["return-code=any python -mpip uninstall -y {project}"],

    // List of branches to benchmark. If not provided, defaults to "main"
    // (for git) or "default" (for mercurial).
    "branches": ["dev"], // for git
    // "branches": ["default"], // for mercurial

    // The DVCS being used. If not set, it will be automatically
    // determined from "repo" by looking at the protocol in the URL
    // (if remote), or by looking for special directories, such as
    // ".git" (if local).
    // "dvcs": "git",

    // The tool to use to create environments. May be "conda",
    // "virtualenv", "mamba" (above 3.8)
    // or other value depending on the plugins in use.
    // If missing or the empty string, the tool will be automatically
    // determined by looking for tools on the PATH environment
    // variable.
    "environment_type": "virtualenv",

    // timeout in seconds for installing any dependencies in environment
    // defaults to 10 min
    // "install_timeout": 600,

    // the base URL to show a commit for the project.
    "show_commit_url": "http://github.com/bionumpy/npstructures/commit/",

    // The Pythons you'd like to test against. If not provided, defaults
    // to the current version of Python used to run `asv`.
    // "pythons": ["3.8", "3.12"],

    // The list of conda channel names to be searched for benchmark
    // dependency packages in the specified order
    // "conda_channels": ["conda-forge", "defaults"],

    // A conda environment file that is used for environment creation.
    // "conda_environment_file": "environment.yml",

    // The matrix of dependencies to test. Each key of the "req"
    // requirements dictionary is the name of a package (in PyPI) and
    // the values are version numbers. An empty list or empty string
    // indicates to just test against the default (latest)
    // version. null indicates that the package is to not be
    // installed. If the package to be tested is only available from
    // PyPi, and the 'environment_type' is conda, then you can preface
    // the package name by 'pip+', and the package will be installed
    // via pip (with all the conda available packages installed first,
    // followed by the pip installed packages).
    //
    // The ``@env`` and ``@env_nobuild`` keys contain the matrix of
    // environment variables to pass to build and benchmark commands.
    // An environment will be created for every combination of the
    // cartesian product of the "@env" variables in this matrix.
    // Variables in "@env_nobuild" will be passed to every environment
    // during the benchmark phase, but will not trigger creation of
    // new environments. A value of ``null`` means that the variable
    // will not be set for the current combination.
    //
    // "matrix": {
    //     "req": {
    //         "numpy": ["1.6", "1.7"],
    //         "six": ["", null], // test with and without six installed
    //         "pip+emcee": [""] // emcee is only available for install with pip.
    //     },
    //     "env": {"ENV_VAR_1": ["val1", "val2"]},
    //     "env_nobuild": {"ENV_VAR_2": ["val3", null]},
    // },

    // Combinations of libraries/python versions can be excluded/included
    // from the set to test. Each entry is a dictionary containing additional
    // key-value pairs to include/exclude.
    //
    // An exclude entry excludes entries where all values match. The
    // values are regexps that should match the whole string.
    //
    // An include entry adds an environment. Only the packages listed
    // are installed. The 'python' key is required. The exclude rules
    // do not apply to includes.
    //
    // In addition to package names, the following keys are available:
    //
    // - python
    //     Python version, as in the *pythons* variable above.
    // - environment_type
    //     Environment type, as above.
    // - sys_platform
    //     Platform, as in sys.platform. Possible values for the common
    //     cases: 'linux2', 'win32', 'cygwin', 'darwin'.
    // - req
    //     Required packages
    // - env
    //     Environment variables
    // - env_nobuild
    //     Non-build environment variables
    //
    // "exclude": [
    //     {"python": "3.2", "sys_platform": "win32"}, // skip py3.2 on windows
    //     {"environment_type": "conda", "req": {"six": null}}, // don't run without six on conda
    //     {"env": {"ENV_VAR_1": "val2"}}, // skip val2 for ENV_VAR_1
    // ],
    //
    // "include": [
    //     // additional env for python3.12
    //     {"python": "3.12", "req": {"numpy": "1.26"}, "env_nobuild": {"FOO": "123"}},
    //     // additional env if run on windows+conda
    //     {"platform": "win32", "environment_type": "conda", "python": "3.12", "req": {"libpython": ""}},
    // ],

    // The directory (relative to the current directory) that benchmarks are
    // stored in. If not provided, defaults to "benchmarks"
    // "benchmark_dir": "benchmarks",

    // The directory (relative to the current directory) to cache the Python
    // environments in. If not provided, defaults to "env"
    "env_dir": ".asv/env",

    // The directory (relative to the current directory) that raw benchmark
    // results are stored in. If not provided, defaults to "results".
    "results_dir": ".asv/results",

    // The directory (relative to the current directory) that the html tree
    // should be written to. If not provided, defaults to "html".
    "html_dir": ".asv/html",

    // The number of characters to retain in the commit hashes.
    // "hash_length": 8,

    // `asv` will cache results of the recent builds in each
    // environment, making them faster to install next time. This is
    // the number of builds to keep, per environment.
    // "build_cache_size": 2,

    // The commits after which the regression search in `asv publish`
    // should start looking for regressions. Dictionary whose keys are
    // regexps matching to benchmark names, and values corresponding to
    // the commit (exclusive) after which to start looking for
    // regressions. The default is to start from the first commit
    // with results. If the commit is `null`, regression detection is
    // skipped for the matching benchmark.
    //
    // "regressions_first_commits": {
    //     "some_benchmark": "352cdf", // Consider regressions only after this commit
    //     "another_benchmark": null, // Skip regression detection altogether
    // },

    // The thresholds for relative change in results, after which `asv
    // publish` starts reporting regressions. Dictionary of the same
    // form as in ``regressions_first_commits``, with values
    // indicating the thresholds. If multiple entries match, the
    // maximum is taken. If no entry matches, the default is 5%.
    //
    // "regressions_thresholds": {
    //     "some_benchmark": 0.01, // Threshold of 1%
    //     "another_benchmark": 0.5, // Threshold of 50%
    // },
}
Empty file added benchmarks/__init__.py
Empty file.
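The new benchmarks/__init__.py only marks the benchmark directory as a Python package; the benchmark modules themselves are not visible in this diff. As a purely illustrative sketch (the module name, class, and data sizes below are hypothetical, not part of this commit), an asv benchmark for RaggedArray could look as follows, relying on asv's convention that setup() runs before each timed method and that methods prefixed with time_ are measured:

# benchmarks/bench_ragged.py -- hypothetical example, not part of this commit
import numpy as np
from npstructures import RaggedArray


class TimeRaggedArray:
    # asv times every method whose name starts with time_;
    # setup() is re-run before each timed method.
    def setup(self):
        # construct from a list of lists, as in the README examples
        self.ra = RaggedArray([list(range(n % 50 + 1)) for n in range(2000)])

    def time_add_scalar(self):
        self.ra + 1

    def time_concatenate(self):
        np.concatenate((self.ra, self.ra))

With benchmark_dir left at its default of "benchmarks" (see asv.conf.json above), asv run on the dev branch would discover such a module, and the workflow would publish the timings to gh-pages.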
