
Release 2.1.1 #141

Merged: 94 commits into main, Jan 27, 2025
Conversation

@dylanuys (Contributor) commented on Jan 23, 2025

Release 2.1.1

Updates

  • Introduces HunyuanVideo to validator challenges
  • Upgrades diffusers to 0.32.2
  • Adds random selection of generation parameters for resolution, number of inference steps, and number of frames. See bitmind/validator/config.py for the configured minimums, maximums, and options for each model; a rough sketch of this selection follows this list.
  • Adds model-specific FPS settings for saving video outputs
  • Removes score decay for unselected miner UIDs
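To make the randomized parameter selection concrete, the sketch below picks per-challenge generation parameters from per-model bounds. The dictionary layout, value ranges, and function name are illustrative assumptions; the real configuration lives in bitmind/validator/config.py and may be structured differently.

# Minimal sketch of randomized generation-parameter selection (assumed structure;
# the actual bounds and layout live in bitmind/validator/config.py).
import random

# Hypothetical per-model parameter ranges and options.
GENERATION_PARAMS = {
    "HunyuanVideo": {
        "resolution_options": [(512, 512), (640, 480), (720, 480)],
        "num_inference_steps": {"min": 20, "max": 50},
        "num_frames_options": [33, 49, 61],
        "save_fps": 15,  # model-specific FPS used when writing the output video
    },
}

def sample_generation_params(model_name: str) -> dict:
    """Randomly pick resolution, inference steps, and frame count within a model's bounds."""
    cfg = GENERATION_PARAMS[model_name]
    height, width = random.choice(cfg["resolution_options"])
    steps = random.randint(cfg["num_inference_steps"]["min"], cfg["num_inference_steps"]["max"])
    frames = random.choice(cfg["num_frames_options"])
    return {"height": height, "width": width, "num_inference_steps": steps, "num_frames": frames}

print(sample_generation_params("HunyuanVideo"))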

Validator Update Process

Important: If you have less than the previously recommended 600GB of storage, you may run out of space with this upgrade. The new recommended storage is 1TB, which leaves headroom for future releases rather than requiring incremental upgrades.

  • Autoupdate Validators: No action necessary.
  • Self-managed Validators: Run ./setup_env.sh and restart your validator.
    • The setup script will update your diffusers version.
    • The HunyuanVideo model added in this release will be downloaded when the validator starts back up (a sketch of this first-start download follows below).
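As a rough illustration of that first-start download, the sketch below loads HunyuanVideo through diffusers, which pulls the weights into ~/.cache/huggingface on first use and reuses the cache afterwards. The checkpoint id, dtypes, and generation settings here are assumptions for illustration, not the validator's actual code.

import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

# Assumed diffusers-format checkpoint id; the validator may reference a different repo.
model_id = "hunyuanvideo-community/HunyuanVideo"

# from_pretrained downloads to ~/.cache/huggingface on first use, then reuses the cache.
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.float16
)
pipe.vae.enable_tiling()  # reduce VRAM use during video decoding
pipe.to("cuda")

# Illustrative generation call; resolution, frame count, and steps are assumptions.
frames = pipe(
    prompt="a red panda walking through snow",
    height=320,
    width=512,
    num_frames=61,
    num_inference_steps=30,
).frames[0]

# Model-specific FPS when saving the output video (15 fps assumed here).
export_to_video(frames, "hunyuan_sample.mp4", fps=15)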

Current Validator Storage Overview

  • Total storage used by SN34 is < 450GB, including the repository, models, cached data, and conda environment
  • HunyuanVideo takes up an additional 16GB of space in the Huggingface cache
  • ~/.cache/huggingface (generative models) is now 221GB
  • ~/.cache/sn34 (real and synthetic data) is ~200GB (unchanged)

dylanuys and others added 30 commits November 19, 2024 17:17
* adding rich arg, adding coldkeys and hotkeys

* moving rich to payload from headers

* bump version

---------

Co-authored-by: benliang99 <[email protected]>
Adding two finetuned image models to expand validator challenges
Updated transformers version to fix tokenizer initialization error
* Made gpu id specification consistent across synthetic image generation models

* Changed gpu_id to device

* Docstring grammar

* add neuron.device to SyntheticImageGenerator init

* Fixed variable names

* adding device to start_validator.sh

* deprecating old/biased random prompt generation

* properly clear gpu of moderation pipeline

* simplifying usage of self.device

* fixing moderation pipeline device

* explicitly defining model/tokenizer for moderation pipeline to avoid accelerate auto device management

* deprecating random prompt generation

---------

Co-authored-by: benliang99 <[email protected]>
bump version
* simple video challenge implementation wip

* dummy multimodal miner

* constants reorg

* updating verify_models script with t2v

* fixing MODEL_PIPELINE init

* cleanup

* __init__.py

* hasattr fix

* num_frames must be divisible by 8

* fixing dict iteration

* dummy response for videos

* fixing small bugs

* fixing video logging and compression

* apply image transforms uniformly to frames of video

* transform list of tensor to pil for synapse prep

* cleaning up vali forward

* miner function signatures to use Synapse base class instead of ImageSynapse

* vali requirements imageio and moviepy

* attaching separate video and image forward functions

* separating blacklist and priority fns for image/video synapses

* pred -> prediction

* initial synth video challenge flow

* initial video cache implementation

* video cache cleanup

* video zip downloads

* wip fairly large refactor of data generation, functionality and form

* generalized hf zip download fn

* had claude improve video_cache formatting

* vali forward cleanup

* cleanup + turning back on randomness for real/fake

* fix relative import

* wip moving video datasets to vali config

* Adding optimization flags to vali config

* check if captioning model already loaded

* async SyntheticDataGenerator wip

* async zip download

* ImageCache wip

* proper gpu clearing for moderation pipeline

* sdg cleanup

* new cache system WIP

* image/video cache updates

* cleaning up unused metadata arg, improving logging

* fixed frame sampling, parquet image extraction, image sampling

* synth data cache wip

* Moving sgd to its own pm2 process

* synthetic data gen memory management update

* mochi-1-preview

* util cleanup, new requirements

* ensure SyntheticDataGenerator process waits for ImageCache to populate

* adding new t2i models from main

* Fixing t2v model output saving

* miner cleanup

* Moving tall model weights to bitmind hf org

* removing test video pkl

* fixing circular import

* updating usage of hf_hub_download according to some breaking huggingface_hub changes

* adding ffmpeg to vali reqs

* adding back in video models in async generation after testing

* renaming UCF directory to DFB, since it now contains TALL

* remaining renames for UCF -> DFB

* pyffmpegg

* video compatible data augmentations

* Default values for level, data_aug_params for failure case

* switching image challenges back on

* using sample variable to store data for all challenge types

* disabling sequential_cpu_offload for CogVideoX5b

* logging metadata fields to w&b

* log challenge metadata

* bump version

* adding context manager for generation w different dtypes

* variable name fix in ComposeWithTransforms

* fixing broken DFB stuff in tall_detector.py

* removing unnecessary logging

* fixing outdated variable names

* cache refactor; moving shared functionality to BaseCache

* finally automating w&b project setting

* improving logs

* improving validator forward structure

* detector ABC cleanup + function headers

* adding try except for miner performance history loading

* fixing import

* cleaning up vali logging

* pep8 formatting video_utils

* cleaning up start_validator.sh, starting validator process before data gen

* shortening vali challenge timer

* moving data generation management to its own script & added w&B logging

* run_data_generator.py

* fixing full_path variable name

* changing w&b name for data generator

* yaml > json gang

* simplifying ImageCache.sample to always return one sample

* adding option to skip a challenge if no data are available in cache

* adding config vars for image/video detector

* cleaning up miner class, moving blacklist/priority to base

* updating call to image_cache.sample()

* fixing mochi gen to 84 frames

* fixing video data padding for miners

* updating setup script to create new .env file

* fixing weight loading after detector refactor

* model/detector separation for TALL & modifying base DFB code to allow device configuration

* standardizing video detector input to a frames tensor

* separation of concerns; moving all video preprocessing to detector class

* pep8 cleanup

* reformatting if statements

* temporarily removing initial dataset class

* standardizing config loading across video and image models

* finished VideoDataloader and supporting components

* moved save config file out of train script

* backwards compatibility for ucf training

* moving data augmentation from RealFakeDataset to Dataset subclasses for video aug support

* cleaning up data augmentation and target_image_size

* import cleanup

* gitignore update

* fixing typos picked up by flake8

* fixing function name ty flake8

* fixing test fixtures

* disabling pytests for now, some are broken after refactor and it's 4am
@@ -370,7 +370,17 @@ def update_scores(self, rewards: np.ndarray, uids: List[int]):

     # Compute forward pass rewards, assumes uids are mutually exclusive.
     # shape: [ metagraph.n ]
-    scattered_rewards: np.ndarray = np.zeros_like(self.scores)
+    scattered_rewards: np.ndarray = self.scores.copy()
@dylanuys (Contributor, author) commented:
removing decay by setting scores to previous value
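For context, here is a self-contained sketch of what this change does inside update_scores. Only the scattered_rewards lines mirror the diff; the surrounding class and the EMA smoothing constant are assumptions.

import numpy as np
from typing import List

class Validator:
    """Stripped-down sketch; the real validator class carries much more state."""

    def __init__(self, n_uids: int, alpha: float = 0.1):
        self.scores = np.zeros(n_uids, dtype=np.float32)
        self.alpha = alpha  # assumed EMA smoothing factor

    def update_scores(self, rewards: np.ndarray, uids: List[int]):
        # Before this release: scattered_rewards = np.zeros_like(self.scores),
        # so every uid not queried this round was averaged toward zero (decay).
        # Now: start from the previous scores, so unselected uids keep their value.
        scattered_rewards: np.ndarray = self.scores.copy()
        scattered_rewards[uids] = rewards  # only queried uids receive new rewards

        # Exponential moving average over all scores.
        self.scores = self.alpha * scattered_rewards + (1 - self.alpha) * self.scores

With the zeros baseline, an unselected uid's score shrank by a factor of (1 - alpha) every round; starting from the previous scores keeps it flat until that uid is queried again.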

A reviewer (Contributor) replied:
nice, has there been any testing of this with simulation?

@kenobijon (Contributor) left a comment:

looks good!

Just want to understand any implications of using the same score for non-sampled miners. Should we toggle anything else due to the potential flattening of scores?


@kenobijon (Contributor) commented: lgtm

@dylanuys merged commit 0696a60 into main on Jan 27, 2025 (2 checks passed)