Release 1.0.2 (#50)

* Updated synthetic image mirror generation script, created a helper function for generating images in the SyntheticImageGenerator class, moved notebooks to new notebooks dir.

* Restored the ensure_save_path func to annotation_utils.py

* Added latency tracking and a max images generated field for the synthetic image mirror pipeline.

* Added mean synthetic image gen latency print statement

* Update example arg inputs

* Fix imports

* Fixed and reformatted args

* Suppressed TensorFlow warnings, fixed image gen from annotation

* Index and loop bugfixes

* Index, looping, args logic fixes.

* Add load diffuser function call

* Clear GPU after using the synthetic image generator (see the GPU cleanup sketch after this list)

* Always load from Hugging Face.

* Batch processing for memory optimization. Added optional name field to generate_image in SyntheticImageGenerator for future customization.

* Memory optimizations (saving annotation .jsons to disk), added args for chunking, pm2 examples

* Fixed saving as JSON on disk, ensured no hanging reference when the GPU is cleared in SyntheticImageGenerator

* Replaced the generic DiffusionPipeline with StableDiffusionPipeline, which inherits from it. Specified generated image dimensions in the diffuser call params (see the pipeline sketch after this list).

* Convert diffuser to float32 before moving it to the CPU, fixed duplicate image count logs

* Added a testing function to save images from real image dataset, changed annotations 'index' field to 'id' for consistency, various data loading and parameter fixes

* Added pipeline for diffusion models to constants.py, and dynamic pipeline loading and image size customization to generation.

* Fixed Hugging Face authentication errors. Added instruction to authenticate with huggingface-cli login

* Fixed all annotations being used to generate mirrors regardless of start and end indices

* Added a new load_and_sort_dataset function to handle Hugging Face dataset rows being ordered by filename string-wise instead of numerically (see the sorting sketch after this list). Added generate_synthetic_images arg and updated dataset naming conventions for parallelization-friendliness. Disabled diffusion pipeline progress bars. Added const for progress updates in terminal.

* Removed extra disable progress bar call. Added ceil import for progress calculation.

* Adjust Hugging Face annotations dataset name

* Reverted annotations dataset name to have data range, now requiring start_index and end_index args.

* Re-removed data range from annotations

* Update 'index' to 'id'

* Fixed loading annotations from Hugging Face and saving specified indices to disk.

* Utils refactored, smaller functions. Added resize arg. Added combine_datasets script to put together all generated splits into one Hugging Face dataset.

* Replace hardcoded name

* Fix fstring

* Fix args

* Fixed typos

* Updated combine_datasets.py to match Hugging Face dataset nomenclature.

* removing unused files

* initial validator forward pytest

* initial ci.yml

* new mock classes for ci workflow

* temporarily removing old version of generate_synthetic_data.py

* rename get_mock_image() -> create_random_image()

* adding test_mock.py

* renaming build -> test step in ci.yml

* test_rewards.py

* parameterizing fake_prob to allow intentionally testing real/synth image flows in vali fwd

* forcing vali fwd through real and synth image flows

* fake_prob -> _fake_prob

* using dot operator to read config in mock vali until I replace namespace cfg with bt.config

* allowing mock code to skip force_register_neuron in the case that the neuron was already registered in previous test instance

* removing unused circleci dir from template repo

* image transforms tests

* fixing setting of mock dendrite process_time

* adding test_mock.py

* reset mock chain state in between test cases

* cleaning up state management for MockSubtensor

* __init__.py

* replacing hardcoded string with random image b64

* Fixed saving synthetic images after resizing.

* new auto update implementation from sn19

* initial self heal script from sn19

* Flag for downloading annotations from HuggingFace

* fixing reference to self.config

* Enforcing no watermarking in all cases

* self heal in autoupdate script

* making autoupdate scripts executable

* self heal restart 6 -> 6.5

* typo

* allowing --no-auto-update and --no-self-heal for validators

* combining run scripts into run_neuron.py

* replacing neuron type with --validator and --miner

* documentation updates for new run script

* docs update

* adding wandb to docs

* Arg for skipping annotation generation

* Prompt truncation for annotations longer than max token length (see the truncation sketch after this list)

* Suppress token max exceeded warning, cleaned up error logging

* Removed all tqdm loading bars, cleaned imports, updated fake dataset paths to parquet versions.

* Improved annotation cleanliness with inter-prompt spacing and stripped endings.

* removing fixtures reference from mock.py

* read btcli args from .env

* docs update

* Formatting

* fixing fixtures import

* adding .env file creation to install script

* moving network (test/finney) into .env, reducing script redundancy

* missing netuid arg for MockSubtensor/MockMetagraph inits in test

* adding .env to .gitignore

* AXON_PORT -> MINER_AXON_PORT env var rename

* docs updates to reflect latest run_neuron.py updates

* updating .env paths

* small docs update

* Fixed annotation json filenames not starting with start_idx arg

* locking down version numbers

* Added docstrings and comments

* fixing image_index field for wandb logging

* try except for wandb init

* adding retries for NaN images (see the retry sketch after this list)

* fixing image isnan check by adding np.any

* rename wandb fields *image_id -> *image_name

* Updated failure case for generating annotations.

* Adjusted TF logging level to include error messages. Cleaned up unnecessary imports. Simplified clear_gpu to not move the tensor to the CPU.

* Reverted deletion of necessary diffusion pipeline imports. Adjusted TF logging level in dataset generation script to be consistent with synthetic generation classes.

* adding a sleep to reduce metagraph resync freq

* fixing edge case that occurs when only 1 miner has nonzero weight

* bump version to 1.0.2

* fixing download_data extension

* Update fake dataset paths

* replacing conda activate with /home/user/mambaforge/envs/tensorml
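
The sketches below illustrate a few of the changes listed above. They are hedged reconstructions, not code copied from this commit.

A minimal sketch of the GPU cleanup referenced in the "Clear GPU" item, assuming the pipeline is held in a `diffuser` attribute of SyntheticImageGenerator (the attribute and method names are assumptions):

```python
import gc
import torch

class SyntheticImageGenerator:
    """Only the cleanup path is sketched here; generation code is elided."""

    def __init__(self):
        self.diffuser = None  # set by a load_diffuser() call elsewhere

    def clear_gpu(self):
        # Drop the only reference to the pipeline so nothing keeps it alive,
        # then collect it and release cached VRAM back to the driver.
        self.diffuser = None
        gc.collect()
        torch.cuda.empty_cache()
```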
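
For the StableDiffusionPipeline swap and the explicit image dimensions, a rough sketch (the model id, prompt, and 256x256 size are illustrative; the real pipeline/model mapping lives in constants.py):

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; the actual model comes from the constants.py mapping.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.set_progress_bar_config(disable=True)  # progress bars disabled, per the changelog
pipe = pipe.to("cuda")

# Passing height/width in the call params fixes the generated image dimensions.
image = pipe("a photo of a city street at dusk", height=256, width=256).images[0]
image.save("synthetic_mirror.png")
```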
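
The string-vs-numeric ordering problem behind load_and_sort_dataset can be sketched like this (the `image_name` column is an assumption):

```python
import re
from datasets import load_dataset

def load_and_sort_dataset(dataset_name: str, split: str = "train"):
    """Sort rows by the integer embedded in their filenames, since Hugging Face
    returns them in string order ('img_10' before 'img_2')."""
    ds = load_dataset(dataset_name, split=split)

    def numeric_key(name: str) -> int:
        match = re.search(r"\d+", name)
        return int(match.group()) if match else -1

    names = ds["image_name"]  # column name is an assumption
    order = sorted(range(len(names)), key=lambda i: numeric_key(names[i]))
    return ds.select(order)
```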
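
Prompt truncation for over-long annotations, sketched against CLIP's 77-token limit (the tokenizer checkpoint and the limit are assumptions about the text encoder in use):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

def truncate_prompt(prompt: str, max_length: int = 77) -> str:
    """Clip the annotation to the text encoder's max token length so the
    pipeline does not warn about over-long token sequences."""
    token_ids = tokenizer(prompt, truncation=True, max_length=max_length)["input_ids"]
    return tokenizer.decode(token_ids, skip_special_tokens=True)
```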
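
And the NaN-image retry with the np.any fix, as a sketch (generate_image and the retry count are hypothetical):

```python
import numpy as np

def generate_non_nan_image(generator, annotation, max_retries: int = 3):
    """Regenerate when the diffuser produces NaNs; np.any collapses the
    element-wise isnan result, so a single NaN pixel triggers a retry."""
    for _ in range(max_retries):
        image = generator.generate_image(annotation)  # hypothetical API
        arr = np.asarray(image, dtype=np.float32)
        if not np.any(np.isnan(arr)):
            return image
    raise RuntimeError("Diffuser kept returning NaN images")
```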

---------

Co-authored-by: Benjamin <[email protected]>
Co-authored-by: aliang322 <[email protected]>
3 people authored Aug 19, 2024
1 parent ee149f8 commit 97dd144
Showing 48 changed files with 1,236 additions and 1,560 deletions.
168 changes: 0 additions & 168 deletions .circleci/config.yml

This file was deleted.

40 changes: 40 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,40 @@
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python

name: Continuous Integration

on:
  push:
    branches: [ "main", "testnet" ]
  pull_request:
    branches: [ "main", "testnet" ]

permissions:
  contents: read

jobs:
  test:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v4
    - name: Set up Python 3.10
      uses: actions/setup-python@v3
      with:
        python-version: "3.10"
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install flake8 pytest pytest-asyncio
        pip install -r requirements.txt
    - name: Lint with flake8
      run: |
        # stop the build if there are Python syntax errors or undefined names
        flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
    - name: Test with pytest
      run: |
        # run tests in tests/ dir and only fail if there are failures or errors
        pytest tests/ --verbose --failed-first --exitfirst --disable-warnings
3 changes: 2 additions & 1 deletion .gitignore
@@ -162,4 +162,5 @@ cython_debug/
testing/
data/
checkpoints/
-.requirements_installed
+.requirements_installed
+*.env
9 changes: 9 additions & 0 deletions autoupdate_miner_steps.sh
@@ -0,0 +1,9 @@
#!/bin/bash

# Thank you to Namoray of SN19 for their autoupdate implementation!
# THIS FILE CONTAINS THE STEPS NEEDED TO AUTOMATICALLY UPDATE THE REPO
# THIS FILE ITSELF MAY CHANGE FROM UPDATE TO UPDATE, SO WE CAN DYNAMICALLY FIX ANY ISSUES

echo $CONDA_PREFIX
$CONDA_PREFIX/bin/pip install -e .
echo "Autoupdate steps complete :)"
10 changes: 10 additions & 0 deletions autoupdate_validator_steps.sh
@@ -0,0 +1,10 @@
#!/bin/bash

# Thank you to Namoray of SN19 for their autoupdate implementation!
# THIS FILE CONTAINS THE STEPS NEEDED TO AUTOMATICALLY UPDATE THE REPO
# THIS FILE ITSELF MAY CHANGE FROM UPDATE TO UPDATE, SO WE CAN DYNAMICALLY FIX ANY ISSUES

echo $CONDA_PREFIX
$CONDA_PREFIX/bin/pip install -e .
$CONDA_PREFIX/bin/python bitmind/download_data.py
echo "Autoupdate steps complete :)"
2 changes: 1 addition & 1 deletion bitmind/__init__.py
@@ -18,7 +18,7 @@
# DEALINGS IN THE SOFTWARE.


__version__ = "1.0.1"
__version__ = "1.0.2"
version_split = __version__.split(".")
__spec_version__ = (
(1000 * int(version_split[0]))
Empty file removed bitmind/api/__init__.py
Empty file.
44 changes: 0 additions & 44 deletions bitmind/api/dummy.py

This file was deleted.
