[Note] Changing Model Loading Behavior #964
-
Fantastic, and just a week ago I had written a Python script that lists my models and lets me select one to load with --ckpt at launch :,D
-
Can this setting be made optional? For everything except FLUX, it is much more comfortable to load the model at startup.
-
This might be the cause of an issue I've had where the X/Y/Z Plot script isn't loading checkpoints. Instead of loading from the selected list (I wanted to run my prompt across a number of checkpoints to test how they respond), it just reloads the last loaded checkpoint, repeatedly outputs from that, and then fails when it wants to build the image grid, because all the images are from the same checkpoint ¯\_(ツ)_/¯. Also, if you are planning to fix that, could you add the ability to choose a different hires checkpoint as an X/Y/Z option? I like to use one checkpoint for the initial generation and a contrasting style for the upscale/denoise pass, and it would be very cool to be able to run that through the X/Y/Z script.
-
I think this might relate to my issue: the API seems unable to change the model with override_settings anymore.
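For reference, the pattern being described is a per-request override inside the txt2img payload. A minimal sketch, assuming a local server on the default port and a placeholder checkpoint filename:

```python
import requests

payload = {
    "prompt": "a photo of a cat",
    "steps": 20,
    # Ask the server to use a specific checkpoint for this single request.
    "override_settings": {
        "sd_model_checkpoint": "myModel.safetensors"  # placeholder filename
    },
}
response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()
images = response.json()["images"]  # base64-encoded results
```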
-
Please, please make this optional. Waiting for models to load and unload after each and every render is now the bulk of my render time.
-
This was very necessary with how long Flux models take to load. The bulk of my processing time now, though, is just models loading, so I have an idea: how about something in between this and how it was? Say it doesn't load a model the minute you select it, but once you generate with that model, it swaps to that model and keeps it loaded in the background until you generate with a new model. Is that possible? That way it doesn't have to load every generation; it loads on the first generation only and is then ready to go for future generations.
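What this describes is essentially a lazy cache keyed by checkpoint name: selecting does nothing, and a generation only pays the load cost when the requested model differs from the one already resident. A minimal sketch of the idea in Python (the names and the load callback are made up for illustration, not Forge's internals):

```python
class LazyModelCache:
    """Keep the last-used checkpoint resident; reload only when the name changes."""

    def __init__(self, load_fn):
        self._load_fn = load_fn  # callable that actually loads a checkpoint from disk
        self._name = None
        self._model = None

    def get(self, name):
        if name != self._name:            # only the first use of a new model pays the cost
            self._model = self._load_fn(name)
            self._name = name
        return self._model                # later generations with the same model reuse it
```

With a cache like this, selecting a model in the UI stays free, the first generation with it pays the load once, and every later generation reuses the resident weights until a different model is requested.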
-
So how do you load a model and generate through the API now? It used to just be sd_models.load_model, but that's empty now.
Edit: this works, from the other thread: POST to /sdapi/v1/options with { "sd_model_checkpoint": "modelname.safetensors" }.
Edit 2: actually it worked in the sense that Forge received the request, but for my use case it said it can't find the model, even though the error showed the exact location of it. I tried hacking something together to load it from a separate script temporarily, but when I send a generate request after that, for some reason it makes greyscale, scuffed images. I'm a little hesitant to do outside workarounds, especially with limited knowledge. Weirdly, the model generates images fine via manual action through the UI itself, and when I send the same generate request without loading any models from outside, it works fine.
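For anyone following along, the route mentioned above is the global options endpoint. A hedged sketch of the two calls (the URL is the default local one, and the checkpoint value generally has to match a title the server itself reports):

```python
import requests

base = "http://127.0.0.1:7860"

# List the checkpoints the server knows about; the "title" field is the form
# the options endpoint expects, which helps avoid "can't find the model" errors.
models = requests.get(f"{base}/sdapi/v1/sd-models").json()
print([m["title"] for m in models])

# Switch the active checkpoint globally for subsequent generations.
r = requests.post(f"{base}/sdapi/v1/options",
                  json={"sd_model_checkpoint": models[0]["title"]})
r.raise_for_status()
```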
-
So you wanted to save some time on initial model loading, and ended up causing a huge amount of unpredictable unloading and reloading of models in between generations? Maybe it would be better to revert to how it was before, and just disable the initial startup model load?
-
Why can't an extension load a new checkpoint?
-
I'm applying this crude (and likely to break) workaround patch for the time being. It checks for checkpoint override info and loads it just ahead of processing, then loads the previous one back once processing is done. Save this into a script file:
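This is only a rough sketch of the kind of always-on script being described, not the commenter's original file. It assumes the A1111-style helpers (sd_models.get_closet_checkpoint_match, sd_models.reload_model_weights) still behave the same way in the installed Forge build; the filename is a placeholder.

```python
# scripts/checkpoint_override_workaround.py -- hypothetical filename
from modules import scripts, sd_models, shared


class CheckpointOverrideWorkaround(scripts.Script):
    def title(self):
        return "Checkpoint override workaround"

    def show(self, is_img2img):
        return scripts.AlwaysVisible  # run on every generation, no UI toggle

    def process(self, p, *args):
        # If this job asks for a different checkpoint, load it just before processing.
        wanted = p.override_settings.get("sd_model_checkpoint")
        if wanted:
            self._previous = shared.opts.sd_model_checkpoint
            info = sd_models.get_closet_checkpoint_match(wanted)  # (sic) upstream spelling
            if info is not None:
                sd_models.reload_model_weights(info=info)

    def postprocess(self, p, processed, *args):
        # Put the previously loaded checkpoint back once processing is done.
        previous = getattr(self, "_previous", None)
        if previous:
            info = sd_models.get_closet_checkpoint_match(previous)
            if info is not None:
                sd_models.reload_model_weights(info=info)
```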
-
@lllyasviel, except that extensions like adetailer can't change the checkpoint now, which is more frustrating. I recommend switching back to the previous loading method and just being more careful when selecting a checkpoint to load, or providing a way for extensions to change the checkpoint.
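For context, the usual way an extension asks for a checkpoint change is one line on the processing object, using the same always-on script skeleton as the sketch above. This is a generic illustration, not adetailer's actual code:

```python
from modules import scripts


class SwapCheckpointExample(scripts.Script):
    def title(self):
        return "Swap checkpoint example"

    def show(self, is_img2img):
        return scripts.AlwaysVisible

    def process(self, p, *args):
        # Request a different checkpoint for this job only; placeholder filename.
        p.override_settings["sd_model_checkpoint"] = "someOtherModel.safetensors"
        p.override_settings_restore_afterwards = True  # restore the previous model afterwards
```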
-
Please, for the love of Forge, change it back and put in an alternate fix. Why break so much critical stuff? The API runs like garbage with this.
-
In the past two years, every time you opened WebUI or selected a model, that model would load immediately. However, with current technology, loading a Flux model takes about 40 seconds on my 2023 computer (yours might be faster; my SSD is somewhat of a budget one), which has a modern SSD, motherboard, and decent GPUs.
This means if I switch models twice or want to use a different one, I waste 40 seconds loading the wrong initial model. It's very frustrating.
To fix this, model loading will now only happen when you click "Generate." This change might break some extensions that need to read model info before generating, because no model will be loaded before the first generation. But this is necessary to move forward.