
[Feature]: xyz lora strength with multiple loras #3514

Open
SAC020 opened this issue Oct 24, 2024 · 4 comments
Labels
enhancement New feature or request

Comments


SAC020 commented Oct 24, 2024

Issue Description

I am trying to generate an XYZ grid with a number of loras (same lora, different epochs) at various strengths.

Expected behavior: iterate each lora at strengths 0.2, 0.5, 1.5

[screenshot]

In reality, it does:

Lora1 1.0 / 0.2 / 0.5
Lora2 1.5 / 0.2 / 0.5
Lora3 1.5 / 0.2 / 0.5

Log excerpts below; I've tried to keep the clutter to a minimum.

Full log attached as a file

130855-807475 INFO XYZ grid sta.txt

11:33:39-659849 INFO XYZ grid start: images=30 grid=1 shape=10x3 cells=1 steps=1200
11:33:39-661843 DEBUG XYZ grid apply LoRA: "sacbf-dalle-flux-v3-colab-concept-concise_captions-adam_000000500"
11:33:39-662840 DEBUG XYZ grid apply setting: extra_networks_default_multiplier=0.2
11:33:40-580892 INFO Load network: type=LoRA
apply=['sacbf-dalle-flux-v3-colab-concept-concise_captions-adam_000000500'] te=[1.0]
unet=[[1.0, 1.0, 1.0]] dims=[None] load=0.92

12:29:19-694640 DEBUG XYZ grid apply LoRA: "sacbf-dalle-flux-v3-colab-concept-concise_captions-adam_000000500"
12:29:19-695638 DEBUG XYZ grid apply setting: extra_networks_default_multiplier=0.5
12:29:28-870061 INFO Load network: type=LoRA
apply=['sacbf-dalle-flux-v3-colab-concept-concise_captions-adam_000000500'] te=[0.2]
unet=[[0.2, 0.2, 0.2]] dims=[None] load=9.17

12:30:19-562619 DEBUG XYZ grid apply LoRA: "sacbf-dalle-flux-v3-colab-concept-concise_captions-adam_000000500"
12:30:19-563617 DEBUG XYZ grid apply setting: extra_networks_default_multiplier=1.5
12:30:20-238280 INFO Load network: type=LoRA
apply=['sacbf-dalle-flux-v3-colab-concept-concise_captions-adam_000000500'] te=[0.5]
unet=[[0.5, 0.5, 0.5]] dims=[None] load=0.67

12:31:03-149304 DEBUG XYZ grid apply LoRA: "sacbf-dalle-flux-v3-colab-concept-concise_captions-adam_000001000"
12:31:03-150792 DEBUG XYZ grid apply setting: extra_networks_default_multiplier=0.2

12:31:04-668023 INFO Load network: type=LoRA
apply=['sacbf-dalle-flux-v3-colab-concept-concise_captions-adam_000001000'] te=[1.5]
unet=[[1.5, 1.5, 1.5]] dims=[None] load=1.52

12:31:47-490868 DEBUG XYZ grid apply LoRA: "sacbf-dalle-flux-v3-colab-concept-concise_captions-adam_000001000"
12:31:47-491867 DEBUG XYZ grid apply setting: extra_networks_default_multiplier=0.5
12:31:48-185527 INFO Load network: type=LoRA
apply=['sacbf-dalle-flux-v3-colab-concept-concise_captions-adam_000001000'] te=[0.2]
unet=[[0.2, 0.2, 0.2]] dims=[None] load=0.69

12:32:31-703321 DEBUG XYZ grid apply LoRA: "sacbf-dalle-flux-v3-colab-concept-concise_captions-adam_000001000"
12:32:31-704319 DEBUG XYZ grid apply setting: extra_networks_default_multiplier=1.5
12:32:32-421913 INFO Load network: type=LoRA
apply=['sacbf-dalle-flux-v3-colab-concept-concise_captions-adam_000001000'] te=[0.5]
unet=[[0.5, 0.5, 0.5]] dims=[None] load=0.72

Version Platform Description

PS C:\ai\automatic> .\webui --debug --medvram
Using VENV: C:\ai\automatic\venv
13:45:14-797851 INFO Starting SD.Next
13:45:14-800843 INFO Logger: file="C:\ai\automatic\sdnext.log" level=DEBUG size=65 mode=create
13:45:14-801841 INFO Python: version=3.11.9 platform=Windows bin="C:\ai\automatic\venv\Scripts\Python.exe"
venv="C:\ai\automatic\venv"
13:45:15-079447 INFO Version: app=sd.next updated=2024-10-21 hash=cc4f4819 branch=dev
url=https://github.com/vladmandic/automatic/tree/dev ui=dev
13:45:15-930345 INFO Repository latest available 0d332ca 2024-10-23T18:59:10Z
13:45:15-943385 INFO Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows
release=Windows-10-10.0.22631-SP0 python=3.11.9

Relevant log output

No response

Backend

Diffusers

UI

Standard

Branch

Dev

Model

Other

Acknowledgements

  • I have read the above and searched for existing issues
  • I confirm that this is classified correctly and it's not an extension issue
SAC020 changed the title from [Issue]: xyz not iterating lora strength correctly to [Issue]: xyz not passing lora strength correctly on Oct 24, 2024
@vladmandic
Owner

lora strength is applied by modifying the extra_networks_default_multiplier setting, which means it's global for all loras.
this is a valid request, but i'll have to treat it as a feature request.
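
as a rough illustration of what "global for all loras" means; settings, loras_in_prompt and load_lora below are placeholder stand-ins for this sketch, not actual SD.Next code:

# placeholder stand-ins for illustration only - not SD.Next internals
settings = {"extra_networks_default_multiplier": 0.5}   # single global value
loras_in_prompt = ["lora_a", "lora_b"]

def load_lora(name, strength):
    print(f"loading {name} at strength {strength}")

for lora_name in loras_in_prompt:
    # every lora reads the same global setting, so two loras in the same cell
    # cannot be given different strengths through this mechanism
    load_lora(lora_name, strength=settings["extra_networks_default_multiplier"])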

vladmandic changed the title from [Issue]: xyz not passing lora strength correctly to [Feature]: xyz lora strength with multiple loras on Oct 24, 2024
vladmandic added the enhancement (New feature or request) label on Oct 24, 2024

SAC020 commented Oct 24, 2024

I am not sure I explained this very clearly.

I don't have other loras in the prompt, just the ones on the X axis. The problem is some sort of "delayed" action or precedence: on the first iteration the lora is loaded at strength 1.0 even though xyz says otherwise, and on every subsequent iteration the chain is shifted by one step.

It's something like:

def load_lora(strength):                     # stand-in for the actual lora loading call
    print(f"load lora at strength {strength}")

default_strength = 1.0                       # whatever was in effect before the grid starts
for xyz_lora_strength in [0.2, 0.5, 1.5]:
    load_lora(default_strength)              # loads with the *previous* value
    default_strength = xyz_lora_strength     # the new value only takes effect next iteration

(of course this is just my approximation of the script, but it's exactly the behavior I am seeing)
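
For comparison, the ordering I would expect (with the same kind of stand-in for the lora call), where the axis value is applied before the lora is loaded:

def load_lora(strength):                     # stand-in for the actual lora loading call
    print(f"load lora at strength {strength}")

for xyz_lora_strength in [0.2, 0.5, 1.5]:
    default_strength = xyz_lora_strength     # apply the axis value first...
    load_lora(default_strength)              # ...so this iteration actually uses it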

@vladmandic
Owner

i understand.


SAC020 commented Nov 9, 2024

Simplified example: I asked for strengths 0 and 0, which should produce identical outputs (zero strength in both cases)

[screenshot]

Output

[screenshot]

Log excerpt

08:04:26-760520 INFO XYZ grid start: images=2 grid=1 shape=1x2 cells=1 steps=40
08:04:26-762516 DEBUG XYZ grid process: x=1/1 y=1/2 z=1/1 total=0.50
08:04:26-764510 DEBUG XYZ grid apply LoRA: "amateurphoto-v6-forcu"

08:04:26-765507 DEBUG XYZ grid apply setting: extra_networks_default_multiplier=0.0

08:04:28-536586 INFO Load network: type=LoRA apply=['amateurphoto-v6-forcu'] te=[1.0] unet=[[1.0, 1.0, 1.0]] dims=[None] load=1.77

08:05:26-346604 INFO Processed: images=1 its=0.34 time=59.58 timers={'init': 1.77, 'prepare': 0.03, 'args': 0.03, 'callback': 3.21,
'pipeline': 55.06, 'decode': 2.66, 'post': 0.03} memory={'ram': {'used': 29.18, 'total': 63.92}, 'gpu': {'used':
2.66, 'total': 23.99}, 'retries': 0, 'oom': 0}
08:05:26-349595 DEBUG XYZ grid process: x=1/1 y=2/2 z=1/1 total=1.00
08:05:26-350593 DEBUG XYZ grid apply LoRA: "amateurphoto-v6-forcu"

08:05:26-352587 DEBUG XYZ grid apply setting: extra_networks_default_multiplier=0.0

08:05:28-691847 INFO Load network: type=LoRA apply=['amateurphoto-v6-forcu'] te=[0.0] unet=[[0.0, 0.0, 0.0]] dims=[None] load=2.34

08:05:59-622190 INFO Processed: images=1 its=0.60 time=33.27 timers={'init': 2.34, 'callback': 3.6, 'pipeline': 30.44, 'decode': 0.44,
'post': 0.03} memory={'ram': {'used': 29.3, 'total': 63.92}, 'gpu': {'used': 7.33, 'total': 23.99}, 'retries': 0,
'oom': 0}
08:05:59-668068 INFO XYZ grid complete: images=2 size=(1792, 2322) time=92.86 save=0.04
08:05:59-670062 DEBUG XYZ grid: grid=1 subgrids=0 images=2 total=3
08:05:59-687018 INFO Processed: images=3 its=3760.02 time=0.02 timers={'init': 2.39, 'callback': 3.6, 'pipeline': 30.44, 'decode':
0.44, 'post': 0.05} memory={'ram': {'used': 29.31, 'total': 63.92}, 'gpu': {'used': 7.33, 'total': 23.99},
'retries': 0, 'oom': 0}
