I am currently building a front-end/back-end interface that lets me use your extension to queue up jobs from a website. Most of it is working, but I'm really struggling to get ADetailer to work with the /agent-scheduler/v1/queue/img2img API.
The reference shows that I should be using "alwayson_scripts": {}, and that is what I use for normal image generations, where it works fine. But when I use it with the API endpoint above, ADetailer doesn't engage at all. When I manually set up a job in AUTOMATIC1111 under the inpaint tab of img2img and enqueue it, the payload that gets sent looks like this:
"prompt": "portrait of a woman",
"all_prompts": ["portrait of a woman"],
"negative_prompt": "",
"all_negative_prompts": [""],
"seed": 1800119224,
"all_seeds": [1800119224],
"subseed": 1047218332,
"all_subseeds": [1047218332],
"subseed_strength": 0,
"width": 1024,
"height": 1024,
"sampler_name": "DPM++ 2M SDE Turbo",
"cfg_scale": 3,
"steps": 13,
"batch_size": 1,
"restore_faces": false,
"face_restoration_model": null,
"sd_model_name": "dreamshaperXL_lightningInpaint",
"sd_model_hash": "1a49cd4473",
"sd_vae_name": null,
"sd_vae_hash": null,
"seed_resize_from_w": -1,
"seed_resize_from_h": -1,
"denoising_strength": 1,
"extra_generation_params": {"ADetailer model": "face_yolov8n.pt",
"ADetailer confidence": 0.3,
"ADetailer dilate erode": 16,
"ADetailer mask blur": 8,
"ADetailer denoising strength": 0.4,
"ADetailer inpaint only masked": true,
"ADetailer inpaint padding": 32,
"ADetailer use separate steps": true,
"ADetailer steps": 6,
"ADetailer use separate checkpoint": true,
"ADetailer checkpoint": "dreamshaperXL_lightningInpaint.safetensors [1a49cd4473]",
"ADetailer use separate sampler": true,
"ADetailer sampler": "DPM++ 2M SDE Turbo",
"ADetailer model 2nd": "person_yolov8n-seg.pt",
"ADetailer confidence 2nd": 0.3,
"ADetailer dilate erode 2nd": 4,
"ADetailer mask blur 2nd": 8,
"ADetailer denoising strength 2nd": 0.4,
"ADetailer inpaint only masked 2nd": true,
"ADetailer inpaint padding 2nd": 32,
"ADetailer version": "24.5.1",
"Denoising strength": 1,
"Mask blur": 12,
"Inpaint area": "Only masked",
"Masked area padding": 56,
"Masked content": "fill"},
"index_of_first_image": 0,
}
As you can see, the UI-enqueued job carries the ADetailer settings in extra_generation_params rather than in "alwayson_scripts": {}.
I'm not getting any errors, and the generated images do contain a face/body, but ADetailer just doesn't seem to engage no matter which of the two forms I try.
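From what I can tell, those extra_generation_params entries correspond to the ad_* argument names that the alwayson_scripts form expects, so in my backend I translate them roughly like this (just a sketch; the ad_* keys I don't already use elsewhere in this issue, e.g. ad_confidence and ad_inpaint_only_masked_padding, are my best guesses from the ADetailer naming pattern rather than something I've verified):

# Rough mapping from the UI-style "ADetailer ..." keys (extra_generation_params)
# to the ad_* keys that go into "alwayson_scripts" -> "ADetailer" -> "args".
# Keys marked "guess" are assumptions based on the naming pattern, not verified.
def adetailer_args_from_ui(params: dict) -> dict:
    return {
        "ad_model": params.get("ADetailer model"),
        "ad_confidence": params.get("ADetailer confidence"),                       # guess
        "ad_dilate_erode": params.get("ADetailer dilate erode"),
        "ad_mask_blur": params.get("ADetailer mask blur"),
        "ad_denoising_strength": params.get("ADetailer denoising strength"),
        "ad_inpaint_only_masked": params.get("ADetailer inpaint only masked"),     # guess
        "ad_inpaint_only_masked_padding": params.get("ADetailer inpaint padding"), # guess
        "ad_use_steps": params.get("ADetailer use separate steps"),                # guess
        "ad_steps": params.get("ADetailer steps"),                                 # guess
        "ad_use_checkpoint": params.get("ADetailer use separate checkpoint"),
        "ad_checkpoint": params.get("ADetailer checkpoint"),
        "ad_use_sampler": params.get("ADetailer use separate sampler"),            # guess
        "ad_sampler": params.get("ADetailer sampler"),                             # guess
    }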
Here is the payload I have built up:
{
  "prompt": "portrait of a woman",
  "width": 1024,
  "height": 1024,
  "sampler_name": "DPM++ 2M SDE Turbo",
  "cfg_scale": 3,
  "alwayson_scripts": {
    "ADetailer": {
      "args": [
        true,
        true,
        {
          "ad_model": "face_yolov8n.pt",
          "ad_checkpoint": "dreamshaperXL_lightningInpaint.safetensors",
          "ad_use_checkpoint": true,
          "ad_cfg_scale": 2.5,
          "ad_denoising_strength": 0.4,
          "ad_inpaint_width": 1024,
          "ad_inpaint_height": 1024,
          "ad_mask_blur": 8,
          "ad_dilate_erode": 16,
          "ad_use_inpaint_width_height": true
        },
        false
      ]
    }
  },
  "steps": 12,
  "checkpoint": "dreamshaperXL_lightningInpaint.safetensors",
  "denoising_strength": 1,
  "is_using_inpainting_conditioning": true,
  "inpainting_fill": 0,
  "inpainting_mask_invert": 0,
  "include_init_images": true,
  "mask_blur_x": 12,
  "mask_blur_y": 12
}
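For reference, this is roughly how my backend submits it (a simplified sketch; only the /agent-scheduler/v1/queue/img2img path comes from the extension, while the host/port and the trimmed payload are placeholders for what my real code sends):

import requests

# Simplified sketch of how my backend enqueues the job.
BASE_URL = "http://127.0.0.1:7860"  # placeholder for wherever the webui is running

payload = {
    "prompt": "portrait of a woman",
    "width": 1024,
    "height": 1024,
    # ... plus "alwayson_scripts" and the rest of the img2img fields shown above
}

resp = requests.post(f"{BASE_URL}/agent-scheduler/v1/queue/img2img", json=payload)
resp.raise_for_status()
print(resp.json())  # I expect the scheduler to answer with the queued task info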
When I'm just generating an image with the txt2img API, ADetailer works fine; see my JSON payload below:
Payload:
{
  "prompt": "Portrait painting, (Regal figure in flowing robes:1.3), Balanced composition, Crowned head, Commanding presence, (Rich jewel tones:1.2), Intricate details, Majestic backdrop, Royal air, (Soft diffused light:1.3)",
  "cfg_scale": "4",
  "steps": "8",
  "height": 1024,
  "width": 1024,
  "seed": "-1",
  "checkpoint": "SDXL-Turbo-LCM\\RealitiesEdgeXLLIGHTNING_TURBOV7.safetensors",
  "alwayson_scripts": {
    "ADetailer": {
      "args": [
        true,
        true,
        {
          "ad_model": "face_yolov8n.pt",
          "ad_checkpoint": "dreamshaperXL_lightningInpaint.safetensors",
          "ad_use_checkpoint": true,
          "ad_cfg_scale": 2.5,
          "ad_denoising_strength": 0.4,
          "ad_inpaint_width": 1024,
          "ad_inpaint_height": 1024,
          "ad_mask_blur": 8,
          "ad_dilate_erode": 16,
          "ad_use_inpaint_width_height": true
        },
        false
      ]
    }
  },
  "enable_hr": false,
  "hr_upscaler": "4x_NMKD-Siax_200k",
  "hr_second_pass_steps": 10,
  "denoising_strength": 0.2,
  "hr_scale": 2,
  "hr_resize_x": 0,
  "hr_resize_y": 0,
  "hr_checkpoint_name": "Use same checkpoint",
  "hr_prompt": "",
  "hr_negative_prompt": "",
  "sampler_name": "DPM++ 2M SDE Turbo"
}
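Not shown in the img2img payload above are the source image and mask; my understanding is that they go in as base64 strings via the standard init_images and mask fields, roughly like this sketch (the file paths are just placeholders):

import base64

# Sketch of how the source image and inpaint mask would be attached as base64
# strings to the img2img payload. File paths are placeholders.
def b64_file(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

img2img_payload = {"prompt": "portrait of a woman"}  # trimmed; see the full payload above
img2img_payload["init_images"] = [b64_file("input.png")]  # base64-encoded source image(s)
img2img_payload["mask"] = b64_file("mask.png")            # base64-encoded inpaint mask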
I guess what I would like to know is: can someone give me a good example of what the JSON payload should look like when using the img2img API for inpainting, and what the ADetailer section of it should look like?