IP-Adapter support for StableDiffusion3ControlNetPipeline #10363

Merged · 5 commits into huggingface:main on Jan 2, 2025

Conversation

@guiyrt (Contributor) commented Dec 23, 2024

What does this PR do?

Inherit from SD3IPAdapterMixin to allow image prompting.

Fixes #10129
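
For context, a minimal sketch of the shape of the change (the pipeline's other base classes are omitted here; see the diff for the exact declaration):

from diffusers import DiffusionPipeline
from diffusers.loaders import SD3IPAdapterMixin

# Sketch only: the real class keeps all of its existing bases and adds the
# mixin, which contributes load_ip_adapter() and set_ip_adapter_scale()
class StableDiffusion3ControlNetPipeline(DiffusionPipeline, SD3IPAdapterMixin):
    ...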

Who can review?

@hlky
@yiyixuxu

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@SahilCarterr (Contributor)

Can you show some example images? @guiyrt

@guiyrt (Contributor, Author) commented Dec 23, 2024

Here are a few examples using stabilityai/stable-diffusion-3.5-large-controlnet-canny and InstantX/SD3.5-Large-IP-Adapter:

Inference code
import torch
from PIL import Image

from diffusers.models import SD3ControlNetModel
from diffusers.image_processor import VaeImageProcessor
from diffusers import StableDiffusion3ControlNetPipeline
from transformers import SiglipVisionModel, SiglipImageProcessor


class SD3CannyImageProcessor(VaeImageProcessor):
    def __init__(self):
        super().__init__(do_normalize=False)

    def preprocess(self, image, **kwargs):
        # Rescale to the value range expected by the SD3.5 canny ControlNet
        image = super().preprocess(image, **kwargs)
        image = image * 255 * 0.5 + 0.5
        return image

    def postprocess(self, image, do_denormalize=True, **kwargs):
        # Always denormalize decoded images, since do_normalize is False
        do_denormalize = [True] * image.shape[0]
        image = super().postprocess(image, **kwargs, do_denormalize=do_denormalize)
        return image

model_id = "stabilityai/stable-diffusion-3.5-large"
image_encoder_id = "google/siglip-so400m-patch14-384"
ip_adapter_id = "InstantX/SD3.5-Large-IP-Adapter"
controlnet_id = "stabilityai/stable-diffusion-3.5-large-controlnet-canny"

controlnet = SD3ControlNetModel.from_pretrained(
    controlnet_id, torch_dtype=torch.float16
)

feature_extractor = SiglipImageProcessor.from_pretrained(image_encoder_id)

image_encoder = SiglipVisionModel.from_pretrained(
    image_encoder_id, torch_dtype=torch.float16
)

pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    feature_extractor=feature_extractor,
    image_encoder=image_encoder,
    controlnet=controlnet
)
pipe.image_processor = SD3CannyImageProcessor()

# Load IP-Adapter, pinned to a specific revision of the InstantX checkpoint
pipe.load_ip_adapter(ip_adapter_id, revision="f1f54ca369ae759f9278ae9c87d46def9f133c78")
pipe.set_ip_adapter_scale(0.5)

# Exclude the image encoder from sequential CPU offload
pipe._exclude_from_cpu_offload.append("image_encoder")
pipe.enable_sequential_cpu_offload()

# Inputs: precomputed canny edge map and IP-Adapter image prompt
controlnet_image = Image.open("canny.jpg").convert("RGB")
ip_adapter_img = Image.open("image.jpg").convert("RGB")

# Note: SD3.5 Large is sensitive to high-resolution generation such as 1536x1536
image = pipe(
    width=1024,
    height=1024,
    prompt="a fox with trees in the background",
    negative_prompt="lowres, low quality, worst quality",
    num_images_per_prompt=4,
    generator=torch.manual_seed(42),
    ip_adapter_image=ip_adapter_img,
    control_image=controlnet_image,
    controlnet_conditioning_scale=1.0,
    guidance_scale=3.5,
    num_inference_steps=60,
).images[0]  # first of the four generated images

image.save("result.jpg")
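
For reference, "canny.jpg" above is a precomputed edge map. A minimal sketch of producing one with OpenCV (the source file name and the edge thresholds are arbitrary assumptions):

import cv2

# Read the source photo as grayscale and extract edges
src = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(src, 100, 200)  # single-channel edge map
cv2.imwrite("canny.jpg", edges)   # loaded above and converted to RGB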

Here I used the original image as input for the IP-Adapter:
[image: batman_grid]

These results look awesome, and the IP-Adapter helps a lot; compare with some outputs generated without an image prompt:
[image: batman_no_ipa_grid]

Here I used different image prompts to change the background:
[image: foxes_grid]

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@hlky (Collaborator) left a comment

Thanks @guiyrt! The examples are great 🤗

@guiyrt (Contributor, Author) commented Jan 2, 2025

Anything left here, @hlky? Should we also add IP-Adapter support for the inpainting ControlNet pipeline?

@hlky (Collaborator) commented Jan 2, 2025

All good I think, @guiyrt. Let's just wait for another review from @yiyixuxu. Yes, we can add IP-Adapter to that pipeline and to SD3's img2img and inpaint pipelines; the loading API would look the same there, as sketched below.
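
A hypothetical sketch of what that would look like (not supported as of this PR; it assumes the img2img pipeline gains SD3IPAdapterMixin):

import torch
from diffusers import StableDiffusion3Img2ImgPipeline

# Hypothetical: load_ip_adapter() does not exist on this pipeline yet, and the
# SigLIP image encoder / feature extractor would be passed in as in the
# ControlNet examples above
pipe = StableDiffusion3Img2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.float16
)
pipe.load_ip_adapter("InstantX/SD3.5-Large-IP-Adapter")
pipe.set_ip_adapter_scale(0.5)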

@yiyixuxu merged commit 68bd693 into huggingface:main on Jan 2, 2025 · 12 checks passed
@yiyixuxu (Collaborator) commented Jan 2, 2025

thanks @hlky @guiyrt!

@guiyrt deleted the sd3-controlnet-ipadapter branch on January 3, 2025
@briannlongzhao

Does this work similarly for depth control? Can you share any instructions or code examples of how to use it with a depth map? Thanks!

@guiyrt (Contributor, Author) commented Jan 13, 2025

Does this work similarly for depth control? Can you share any instructions or code examples of how to use it with a depth map? Thanks!

Yes, it does! The inference code I used to test with the canny ControlNet is in my first comment above.


You just need to change controlnet_id to stabilityai/stable-diffusion-3.5-large-controlnet-depth and pass a depth image instead of a canny image. You also don't need SD3CannyImageProcessor; check the ControlNet model page for details on preprocessing, and see the sketch below. Let me know if you need help with that :)
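
A minimal sketch of the depth variant, adapted from the canny example above (it assumes the depth ControlNet takes a normally preprocessed control image, per its model card, and that the depth map has already been computed; file names are placeholders):

import torch
from PIL import Image

from diffusers.models import SD3ControlNetModel
from diffusers import StableDiffusion3ControlNetPipeline
from transformers import SiglipVisionModel, SiglipImageProcessor

model_id = "stabilityai/stable-diffusion-3.5-large"
image_encoder_id = "google/siglip-so400m-patch14-384"

controlnet = SD3ControlNetModel.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    feature_extractor=SiglipImageProcessor.from_pretrained(image_encoder_id),
    image_encoder=SiglipVisionModel.from_pretrained(image_encoder_id, torch_dtype=torch.float16),
    controlnet=controlnet,
)
# No SD3CannyImageProcessor here: the pipeline's default image processor is used

pipe.load_ip_adapter("InstantX/SD3.5-Large-IP-Adapter", revision="f1f54ca369ae759f9278ae9c87d46def9f133c78")
pipe.set_ip_adapter_scale(0.5)
pipe._exclude_from_cpu_offload.append("image_encoder")
pipe.enable_sequential_cpu_offload()

image = pipe(
    prompt="a fox with trees in the background",
    control_image=Image.open("depth.jpg").convert("RGB"),  # precomputed depth map
    ip_adapter_image=Image.open("image.jpg").convert("RGB"),
    guidance_scale=3.5,
    num_inference_steps=60,
).images[0]
image.save("result_depth.jpg")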
