Specs: RTX 3050 Mobile (6 GB VRAM), AMD Ryzen 7 8845HS, 16 GB RAM
This is what is written in the CMD window:
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-636-gb835f24a
Commit hash: b835f24
Launching Web UI with arguments:
Total VRAM 6144 MB, total RAM 15631 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3050 6GB Laptop GPU : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
C:\Users\yusuf\OneDrive\Desktop\ForgeUI\system\python\lib\site-packages\transformers\utils\hub.py:128: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
warnings.warn(
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: C:\Users\yusuf\OneDrive\Desktop\ForgeUI\webui\models\ControlNetPreprocessor
2025-01-26 02:37:23,975 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': 'C:\Users\yusuf\OneDrive\Desktop\ForgeUI\webui\models\Stable-diffusion\flux1-dev-bnb-nf4-v2.safetensors', 'hash': 'f0770152'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 26.6s (prepare environment: 6.2s, launcher: 0.7s, import torch: 12.2s, initialize shared: 0.3s, other imports: 0.6s, load scripts: 2.6s, create ui: 2.7s, gradio launch: 1.2s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 83.33% GPU memory (5119.00 MB) to load weights, and use 16.67% GPU memory (1024.00 MB) to do matrix computation.
Loading Model: {'checkpoint_info': {'filename': 'C:\Users\yusuf\OneDrive\Desktop\ForgeUI\webui\models\Stable-diffusion\flux1-dev-bnb-nf4-v2.safetensors', 'hash': 'f0770152'}, 'additional_modules': [], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
StateDict Keys: {'transformer': 1722, 'vae': 244, 'text_encoder': 198, 'text_encoder_2': 220, 'ignore': 0}
Using Detected T5 Data Type: torch.float8_e4m3fn
Using Detected UNet Type: nf4
Using pre-quant state dict!
Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
K-Model Created: {'storage_dtype': 'nf4', 'computation_dtype': torch.bfloat16}
Model loaded in 2.7s (unload existing model: 0.3s, forge model load: 2.5s).
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
[Unload] Trying to free 7725.00 MB for cuda:0 with 0 models keep loaded ... Done.
[Memory Management] Target: JointTextEncoder, Free GPU: 5144.00 MB, Model Require: 5154.62 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: -1034.62 MB, CPU Swap Loaded (blocked method): 2094.38 MB, GPU Loaded: 3132.62 MB
Moving model(s) has taken 6.04 seconds
Distilled CFG Scale: 3.5
[Unload] Trying to free 9411.13 MB for cuda:0 with 0 models keep loaded ... Current free memory is 1395.05 MB ... Unload model JointTextEncoder Done.
[Memory Management] Target: KModel, Free GPU: 5104.41 MB, Model Require: 6246.84 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: -2166.43 MB, CPU Swap Loaded (blocked method): 3113.55 MB, GPU Loaded: 3133.29 MB
Moving model(s) has taken 10.40 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:10<00:00, 6.51s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [01:57<00:00, 6.27s/it]
Press any key to continue . . .
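As a sanity check on the [Memory Management] lines above: the negative "Remaining" figures follow directly from Free GPU minus Model Require minus Inference Require, which is why part of each model is swapped to CPU (the "blocked method" swap). A minimal sketch reproducing that arithmetic from the logged numbers (the function name is mine, not Forge's):

```python
# Reproduce the "Remaining" arithmetic from the [Memory Management] log lines.
# Remaining = Free GPU - Model Require - Inference Require; a negative value
# means the model cannot fully fit on the GPU and part of it is swapped to CPU.

def remaining_mb(free_gpu: float, model_require: float, inference_require: float) -> float:
    return round(free_gpu - model_require - inference_require, 2)

# JointTextEncoder line: Free 5144.00 MB, Require 5154.62 MB, Inference 1024.00 MB
print(remaining_mb(5144.00, 5154.62, 1024.00))  # -1034.62, matching the log

# KModel line: Free 5104.41 MB, Require 6246.84 MB, Inference 1024.00 MB
print(remaining_mb(5104.41, 6246.84, 1024.00))  # -2166.43, matching the log
```

So both the text encoder and the Flux transformer overflow the 6 GB card, and the CPU-swap path is expected behavior here rather than a bug.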
I use flux1-dev-bnb-nf4-v2.safetensors and clip_l.safetensors, whereas if I just use the built-in standard SDXL checkpoint, nothing happens. Please help me.
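One thing worth trying, per the log's own hint ("your device supports --cuda-malloc"): pass that flag in the launch arguments. A sketch assuming a webui-user.bat-style launcher (one-click Forge packages may name the file differently; adjust to your install):

```shell
REM webui-user.bat -- hypothetical fragment, adjust to your own install.
REM --cuda-malloc is the flag the startup hint refers to; it may give a
REM modest speedup on supported devices.
set COMMANDLINE_ARGS=--cuda-malloc
```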