
[Testnet] Multiclass Rewards #150

Merged: 9 commits into testnet on Feb 18, 2025

Conversation

dylanuys
Contributor

@dylanuys dylanuys commented Feb 12, 2025

Note: Miners will have at least a week from when this release is publicly announced to prepare.

Introducing multiclass protocols and rewards.

The new reward function takes into account the differentiation between synthetic and semi-synthetic data, but places a higher weight on binary F1 (real vs. any type of synthetic).

  • ImageSynapse and VideoSynapse prediction fields now contain probability vectors for [p_real, p_synthetic, p_semisynthetic]
  • MinerPerformanceTracker updated to track probability vectors, and compute both weighted multiclass F1 and binary F1
  • get_rewards and compute_penalty_multiplier updated to handle probability vectors.
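The reward described above can be sketched as follows. This is an illustrative reconstruction, not the subnet's actual implementation: the function names, the macro averaging, and the 0.75/0.25 weighting are assumptions (the PR only states that binary F1 is weighted more heavily than multiclass F1).

```python
# Hypothetical sketch of the multiclass reward described above. Names,
# macro averaging, and the w_binary weight are illustrative assumptions.

def _f1(tp, fp, fn):
    """Standard F1 from true/false positive and false negative counts."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def binary_f1(probs, labels):
    """Real-vs-synthetic F1 over [p_real, p_synthetic, p_semisynthetic]
    vectors, treating any kind of synthetic (classes 1, 2) as positive."""
    tp = fp = fn = 0
    for p, y in zip(probs, labels):
        pred_synth = (p[1] + p[2]) > p[0]  # combined synthetic mass vs. real
        true_synth = y != 0                # label 0 = real
        tp += pred_synth and true_synth
        fp += pred_synth and not true_synth
        fn += (not pred_synth) and true_synth
    return _f1(tp, fp, fn)

def multiclass_f1(probs, labels, n_classes=3):
    """Macro-averaged F1 over argmax predictions."""
    preds = [max(range(n_classes), key=lambda c: p[c]) for p in probs]
    scores = []
    for c in range(n_classes):
        tp = sum(1 for pr, y in zip(preds, labels) if pr == c and y == c)
        fp = sum(1 for pr, y in zip(preds, labels) if pr == c and y != c)
        fn = sum(1 for pr, y in zip(preds, labels) if pr != c and y == c)
        scores.append(_f1(tp, fp, fn))
    return sum(scores) / n_classes

def reward(probs, labels, w_binary=0.75):
    """Weighted combination favoring binary (real vs. any synthetic) F1."""
    return w_binary * binary_f1(probs, labels) + (1 - w_binary) * multiclass_f1(probs, labels)
```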

Backwards Compatibility

  • prediction dtype set to Union[float, List[float]] for backwards compatibility.
  • Validator.forward() will transform float predictions p to [1-p, p, 0.], allowing miners that make binary classifications to be rewarded based on their binary accuracy.
  • When the Validator base class loads the pickled MinerPerformanceTracker object, it will check if it's from the previous version and transform historical float predictions to probability vectors (as in forward())
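The backwards-compatibility transform can be sketched like this (a minimal illustration; `to_probability_vector` is a hypothetical name, and the handling of the -1 "no response" sentinel is an assumption based on the snippet reviewed in this PR).

```python
# Minimal sketch of the legacy-prediction transform described above.
# A legacy binary prediction p becomes [1-p, p, 0.], and the -1 "no
# response" sentinel is kept as a vector of -1s so downstream penalty
# logic can still detect it. `to_probability_vector` is a hypothetical name.

def to_probability_vector(p):
    if isinstance(p, float):
        if p == -1:
            return [-1.0, -1.0, -1.0]
        return [1.0 - p, p, 0.0]
    return list(p)  # already a multiclass probability vector
```

A miner replying 0.8 ("probably synthetic") is thus scored as [0.2, 0.8, 0.0]: full credit on the binary axis, with no weight placed on the semi-synthetic class.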

@@ -393,16 +395,45 @@ def save_miner_history(self):
        joblib.dump(self.performance_trackers['video'], self.video_history_cache_path)

    def load_miner_history(self):
        def convert_v1_to_v2(tracker):
            """Convert a v1 tracker to v2 format"""
Contributor Author

facilitate smooth transition by transforming historical pre-multiclass data

Contributor

Minor suggestion: Docs briefly explaining differences between v1 and v2 formats

Contributor

@aliang322 aliang322 left a comment

LGTM! Left a few minor comments.

new_predictions = deque(maxlen=tracker.store_last_n_predictions)
new_labels = deque(maxlen=tracker.store_last_n_predictions)

for pred, label in zip(tracker.prediction_history[uid], tracker.label_history[uid]):
Contributor

Question: Would it be safer to check whether the old predictions are valid, and skip them if not (if not 0.0 <= pred <= 1.0: skip)? Is it possible that invalid predictions could be recorded?

Contributor Author

good catch, will double check this and add that constraint!
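The constraint discussed here could look like the following sketch: skip any historical float prediction outside [0, 1] (other than the -1 sentinel) instead of converting it. `convert_history` and the deque length are illustrative assumptions, not the PR's actual code.

```python
from collections import deque

# Hedged sketch of the validity check discussed above for the v1-to-v2
# history conversion. Out-of-range legacy floats are dropped rather than
# converted; the -1 "no response" sentinel is preserved as a vector.

def convert_history(preds, labels, maxlen=100):
    new_preds = deque(maxlen=maxlen)
    new_labels = deque(maxlen=maxlen)
    for p, y in zip(preds, labels):
        if p != -1 and not 0.0 <= p <= 1.0:
            continue  # skip invalid historical predictions
        vec = [-1.0, -1.0, -1.0] if p == -1 else [1.0 - p, p, 0.0]
        new_preds.append(vec)
        new_labels.append(y)
    return new_preds, new_labels
```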

Contributor

@kenobijon kenobijon left a comment

besides some testing looks good

@@ -393,16 +395,48 @@ def save_miner_history(self):
        joblib.dump(self.performance_trackers['video'], self.video_history_cache_path)

    def load_miner_history(self):
        def convert_v1_to_v2(tracker):
Contributor

Has this been tested?

Comment on lines +119 to +125
    if isinstance(p, float):
        if p == -1:
            return np.array([-1., -1., -1.])
        else:
            return np.array([1-p, p, 0.])
    elif isinstance(p, list):
        return np.array(p)
Contributor

Enables both float and list prediction outputs.

@dylanuys dylanuys merged commit 913ee2a into testnet Feb 18, 2025
1 check passed
dylanuys added a commit that referenced this pull request Feb 20, 2025
* Validator Proxy Response Update (#103)

* adding rich arg, adding coldkeys and hotkeys

* moving rich to payload from headers

* bump version

---------

Co-authored-by: benliang99 <[email protected]>

* Two new image models: SDXL finetuned on Midjourney, and SD finetuned on anime images

* Added required StableDiffusionPipeline import

* Updated transformers version to fix tokenizer initialization error

* GPU Specification (#108)

* Made gpu id specification consistent across synthetic image generation models

* Changed gpu_id to device

* Docstring grammar

* add neuron.device to SyntheticImageGenerator init

* Fixed variable names

* adding device to start_validator.sh

* deprecating old/biased random prompt generation

* properly clear gpu of moderation pipeline

* simplifying usage of self.device

* fixing moderation pipeline device

* explicitly defining model/tokenizer for moderation pipeline to avoid accelerate auto device management

* deprecating random prompt generation

---------

Co-authored-by: benliang99 <[email protected]>

* Update __init__.py

bump version

* removing logging

* old logging removed

* adding check for state file in case it is deleted somehow

* removing remaining random prompt generation code

* [Testnet] Video Challenges V1 (#111)

* simple video challenge implementation wip

* dummy multimodal miner

* constants reorg

* updating verify_models script with t2v

* fixing MODEL_PIPELINE init

* cleanup

* __init__.py

* hasattr fix

* num_frames must be divisible by 8

* fixing dict iteration

* dummy response for videos

* fixing small bugs

* fixing video logging and compression

* apply image transforms uniformly to frames of video

* transform list of tensor to pil for synapse prep

* cleaning up vali forward

* miner function signatures to use Synapse base class instead of ImageSynapse

* vali requirements imageio and moviepy

* attaching separate video and image forward functions

* separating blacklist and priority fns for image/video synapses

* pred -> prediction

* initial synth video challenge flow

* initial video cache implementation

* video cache cleanup

* video zip downloads

* wip fairly large refactor of data generation, functionality and form

* generalized hf zip download fn

* had claude improve video_cache formatting

* vali forward cleanup

* cleanup + turning back on randomness for real/fake

* fix relative import

* wip moving video datasets to vali config

* Adding optimization flags to vali config

* check if captioning model already loaded

* async SyntheticDataGenerator wip

* async zip download

* ImageCache wip

* proper gpu clearing for moderation pipeline

* sdg cleanup

* new cache system WIP

* image/video cache updates

* cleaning up unused metadata arg, improving logging

* fixed frame sampling, parquet image extraction, image sampling

* synth data cache wip

* Moving sgd to its own pm2 process

* synthetic data gen memory management update

* mochi-1-preview

* util cleanup, new requirements

* ensure SyntheticDataGenerator process waits for ImageCache to populate

* adding new t2i models from main

* Fixing t2v model output saving

* miner cleanup

* Moving tall model weights to bitmind hf org

* removing test video pkl

* fixing circular import

* updating usage of hf_hub_download according to some breaking huggingface_hub changes

* adding ffmpeg to vali reqs

* adding back in video models in async generation after testing

* renaming UCF directory to DFB, since it now contains TALL

* remaining renames for UCF -> DFB

* pyffmpegg

* video compatible data augmentations

* Default values for level, data_aug_params for failure case

* switching image challenges back on

* using sample variable to store data for all challenge types

* disabling sequential_cpu_offload for CogVideoX5b

* logging metadata fields to w&b

* log challenge metadata

* bump version

* adding context manager for generation w different dtypes

* variable name fix in ComposeWithTransforms

* fixing broken DFB stuff in tall_detector.py

* removing unnecessary logging

* fixing outdated variable names

* cache refactor; moving shared functionality to BaseCache

* finally automating w&b project setting

* improving logs

* improving validator forward structure

* detector ABC cleanup + function headers

* adding try except for miner performance history loading

* fixing import

* cleaning up vali logging

* pep8 formatting video_utils

* cleaning up start_validator.sh, starting validator process before data gen

* shortening vali challenge timer

* moving data generation management to its own script & added w&B logging

* run_data_generator.py

* fixing full_path variable name

* changing w&b name for data generator

* yaml > json gang

* simplifying ImageCache.sample to always return one sample

* adding option to skip a challenge if no data are available in cache

* adding config vars for image/video detector

* cleaning up miner class, moving blacklist/priority to base

* updating call to image_cache.sample()

* fixing mochi gen to 84 frames

* fixing video data padding for miners

* updating setup script to create new .env file

* fixing weight loading after detector refactor

* model/detector separation for TALL & modifying base DFB code to allow device configuration

* standardizing video detector input to a frames tensor

* separation of concerns; moving all video preprocessing to detector class

* pep8 cleanup

* reformatting if statements

* temporarily removing initial dataset class

* standardizing config loading across video and image models

* finished VideoDataloader and supporting components

* moved save config file out of train script

* backwards compatibility for ucf training

* moving data augmentation from RealFakeDataset to Dataset subclasses for video aug support

* cleaning up data augmentation and target_image_size

* import cleanup

* gitignore update

* fixing typos picked up by flake8

* fixing function name ty flake8

* fixing test fixtures

* disabling pytests for now, some are broken after refactor and its 4am

* fixing image_size for augmentations

* Updated validator gpu requirements (#113)

* splitting rewards over image and video (#112)

* Update README.md (#110)

* combining requirements files

* Combined requirements installation

* Improved formatting, added checks to prevent overwriting existing .env files.

* Re-added endpoint options

* Fixed incorrect diffusers install

* Fixed missing initialization of miner performance trackers

* [Testnet] Docs Updates (#114)

* docs updates

* mining docs update

* Removed deprecated requirements files from github tests (#118)

* [Testnet] Async Cache Updates (#119)

* breaking out cache updates into their own process

* adding retries for loading vali info

* moving device config to data generation process

* typo

* removing old run_updater init arg, fixing dataset indexing

* only download 1 zip to start to provide data for vali on first boot

* cache deletion functionality

* log cache size

* name images with dataset prefix

* Increased minimum and recommended storage (#120)

* [Testnet] Data download cleanup (#121)

* moving download_data.py to base_miner/datasets

* removing unused args in download_data

* constants -> config

* docs updates for new paths

* updating outdated fn headers

* pep8

* use png codec, sample by framerate + num frames

* fps, min_fps, max_fps parameterization of sample

* return fps and num frames

* Fix registry module imports (#123)

* Fix registry module imports

* Fixing config loading issues

* fixing frame sampling

* bugfix

* print label on testnet

* reenabling model verification

* update detector class names

* Fixing config_name arg for camo

* fixing detector config in camo

* fixing ref to self.config_name

* update default frame rate

* video dataset creation example

* default config for video datasets

* update default num_videos

---------

Co-authored-by: Andrew <[email protected]>

* Update README.md

* README title

* removing samples from cache

* README

* fixing cache removal (#125)

* Fixed tensor not being set to device for video challenges, causing errors when using cuda (#126)

* Mainnet Prep (#127)

* resetting challenge timer to 60s

* fix logging for miner history loading

* randomize model order, log gen time

* remove frame limit

* separate logging to after data check

* generate with batch=1 first for diverse data availability

* load v1 history path for smooth transition to new incentive

* prune extracted cache

* swapping url open-images for jpg

* removing unused config args

* shortening cache refresh timer

* cache optimizations

* typo

* better variable naming

* default to autocast

* log num files in cache along with GB

* surfacing max size gb variables

* cooked typo

* Fixed wrong validation split key string causing no transform to be applied

* Changed detector arg to be required

* fixing hotkey reset check

* removing logline

* clamp mcc at 0 so video doesn't negatively impact performant image miners

* typo

* improving cache logs

* prune after clear

* only update relevant tracker in reward

* improved logging, turned off cache removal in sample()

---------

Co-authored-by: Andrew <[email protected]>

* removing old reqs from autoupdate

* Re-added bitmind HF org prefix to dataset path

* shortening self heal timer

* autoupdate

* autoupdate

* sample size

* Validator Improvements: VRAM usage, logging (#131)

* ensure vali process and cache update process do not consume any vram

* skip challenge if unable to create wandb Image/Video object (indicating corrupt file)

* manually set log level to info

* removing debug print

* enable_info in config

* cleanup

* version bump

* moved info log setting to config.py

* Bittensor 8.5.1 (#133)

* bittensor 8.5.1

* bump package version

* Prompt Generation Pipeline Improvements (#135)

* Release 2.0.3 (#134)

Bittensor 8.5.1

* enhancing prompts by adding conveyed motion with llama

* Mining docs fix setup_miner_env.sh -> setup_env.sh

* [testnet] I2i/in painting (#137)

* Initial i2i constants for in-painting

* Initial in painting functionality with mask (oval/rectangle) and annotation generation

* Refactor ipg to match sdg format, added caching and support for selecting from multiple in-painting models

* Fixed cache import, updated test script

* Separate cache for i2i when using run_data_generator

* Renamed synth cache constants, added support for multiple validator synth caches, and selection between i2i (20%) and t2i (80%) in forward

* Unifying InPaintingGenerator and SyntheticDataGenerator (#136)

* WIP, unifying InpaintingGenerator and SyntheticDataGenerator

* minor simplification of forward flow

* simplifying forward flow

* standardizing cache structures with the introduction of task type subdirs

* adding i2i models to batch generation

* removing deprecated InPaintingGenerator from run script

* adding --clear-cache option for validator

* updating SDG init params

* fixing last imports + directory structure references

* fixing images passed to generate function for i2i

* option to log masks/original images for i2i challenges

* fixing help hint for output-dir

---------

Co-authored-by: Andrew <[email protected]>

* Updated image_annotation_generator to prompt_generator (#138)

* bump version 2.0.3 -> 2.1.0

* testing cache clearing via autoupdate

* cranking up video rewards to .2

* Add DeepFloyd/IF model and multi-stage pipeline support

Added DeepFloyd/IF-I-XL + IF-II-L model configuration, pipeline_stages configuration for multi-stage models

* Moved multistage pipeline generator to SyntheticDataGenerator

* Args for testing specific model

* [TESTNET] HunyuanVideo (#140)

* hunyuan video initial commit

* delete resolution from from_pretrained_args after extracting h,w

* model_id arg for from_pretrained

* standardizing model_id usage

* fixing autocast and torch_dtype for hunyuan

* adding resolution options and save options for all t2v models

* missing comma in config

* Update __init__.py

* updated subnet arch diagram

* README wip

* docs updates

* README updates

* README updates

* more README updates

* README updates

* README updates

* README cleanup

* more README updates

* Fixing table border removal html for github

* fixing table html

* one last attempt at a prettier table

* one last last attempt at a prettier table

* bumping video rewards

* removing decay for unsampled miners

* README cleanup

* increasing suggested and min compute for validators

* README update, markdown fix in Incentive.md

* README tweak

* removing redundant dereg check from update_scores

* DeepFloyd-specific configs, args for better cache/data gen testing, multistage pipeline i/o

* use largest DeepFloyd-IF I and II models, ensure no watermarker

* Fixed FLUX resolution format, added back model_id and scheduler loading for video models

* Add Janus-Pro-7B t2i model with custom diffuser pipeline class

* Janus repo install

* Removed custom wrapper files, added Janus DiffusionPipeline wrapper to model_utils, cleaned up configs

* Removed DiffusionPipeline import

* Uncomment wandb inits

* Move create_pipeline_generator() to model utils

* Moved model optimizations to model utils

* [Testnet] Multi-Video Challenges (#148)

* Implementation of frame stitching for 2 videos

* ComposeWithParams fix

* vflip + hflip fix

* wandb video logging fix courtesy of eric

* proper arg passing for prompt moderation

* version bump

* i2i crop guardrails

* Update config.py

Removing problematic resolution for CogVideoX5b

* explicit requirements install

* moving pm2 process stopping prior to model verification

* fix for no available videos in multi-video challenge generation

* Update forward.py

Multi-video threshold 0.2

* [Testnet] Multiclass Rewards (#150)

* multiclass protocols

* multiclass rewards

* facilitating smooth transition from old protocol to multiclass

* DTAO: Bittensor SDK 9.0.0 (#152)

* Update requirements.txt

* version bump

* moving prediction backwards compatibility to synapse.deserialize

* mcc-based reward with rational transform

* cast predictions to np array upon loading miner history

* version bump

* [Testnet] video organics (#151)

* improved vali proxy with video endpoint

* renaming endpoints

* Fixing vali proxy initialization

* make vali proxy async again

* handling testnet situation of low miner activity

* BytesIO import

* upgrading transformers

* switching to multipart form data

* Validator Proxy handling of Multiclass Responses (#153)

* update vali proxy to return floats instead of vectors

* removing rational transform for now

* new incentive docs (#154)

* python-multipart

---------

Co-authored-by: benliang99 <[email protected]>
Co-authored-by: Andrew <[email protected]>
Co-authored-by: Kenobi <[email protected]>