Changed the model for Linux pipeline
ilya-lavrenov committed Sep 19, 2024
1 parent 27d4115 commit d890213
Showing 3 changed files with 9 additions and 9 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/stable_diffusion_1_5_cpp.yml
@@ -62,12 +62,12 @@ jobs:
- name: Download and convert models and tokenizer
run: |
source openvino_sd_cpp/bin/activate
-optimum-cli export openvino --model botp/stable-diffusion-v1-5 --weight-format fp16 --task stable-diffusion models/stable_diffusion_v1_5_ov/FP16
+optimum-cli export openvino --model dreamlike-art/dreamlike-anime-1.0 --weight-format fp16 --task stable-diffusion models/dreamlike-art-dreamlike-anime-1.0/FP16
- name: Run app
run: |
source ${{ env.OV_INSTALL_DIR }}/setupvars.sh
-./build/samples/cpp/stable_diffusion/stable_diffusion ./models/stable_diffusion_v1_5_ov/FP16 "cyberpunk cityscape like Tokyo New York with tall buildings at dusk golden hour cinematic lighting"
+./build/samples/cpp/stable_diffusion/stable_diffusion ./models/dreamlike-art-dreamlike-anime-1.0/FP16 "cyberpunk cityscape like Tokyo New York with tall buildings at dusk golden hour cinematic lighting"
stable_diffusion_1_5_cpp-windows:
runs-on: windows-latest
2 changes: 1 addition & 1 deletion samples/cpp/stable_diffusion/CMakeLists.txt
@@ -25,4 +25,4 @@ set_target_properties(stable_diffusion PROPERTIES
install(TARGETS stable_diffusion
RUNTIME DESTINATION samples_bin/
COMPONENT samples_bin
EXCLUDE_FROM_ALL)
12 changes: 6 additions & 6 deletions samples/cpp/stable_diffusion/README.md
@@ -1,6 +1,6 @@
# Stable Diffusion C++ Image Generation Pipeline

-This example showcases inference of text to image models like Stable Diffusion 1.x, 2.x, LCM. The application doesn't have many configuration options to encourage the reader to explore and modify the source code. For example, change the device for inference to GPU. The sample features `ov::genai::Text2ImagePipeline` and uses a text prompt as input source.
+This example showcases inference of text-to-image models such as Stable Diffusion 1.5, 2.1, and LCM. The application deliberately has few configuration options, encouraging the reader to explore and modify the source code, for example to change the inference device to GPU. The sample features `ov::genai::Text2ImagePipeline` and uses a text prompt as its input.
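
For orientation, here is a minimal sketch of how such a pipeline is typically driven from C++. It mirrors the structure of the sample described above, but the header path, the property helpers (`ov::genai::width`, `ov::genai::height`, `ov::genai::num_inference_steps`), and the sample-local `imwrite()` helper are assumptions to verify against the shipped `main.cpp`:

```cpp
// Sketch only: header path, property helpers, and imwrite() are assumptions
// taken from the sample layout, not a confirmed API surface.
#include <cstdlib>
#include <iostream>
#include <string>

#include "openvino/genai/text2image/pipeline.hpp"
#include "imwrite.hpp"  // helper shipped alongside the sample for writing BMP files

int main(int argc, char* argv[]) {
    if (argc != 3) {
        std::cerr << "Usage: " << argv[0] << " <MODEL_DIR> '<PROMPT>'\n";
        return EXIT_FAILURE;
    }
    const std::string models_path = argv[1], prompt = argv[2];
    const std::string device = "CPU";  // change to "GPU" to run inference on a GPU

    // Load the exported Stable Diffusion model and run a single text-to-image generation.
    ov::genai::Text2ImagePipeline pipe(models_path, device);
    ov::Tensor image = pipe.generate(prompt,
                                     ov::genai::width(512),
                                     ov::genai::height(512),
                                     ov::genai::num_inference_steps(20));

    imwrite("image.bmp", image, true);  // write the generated image next to the binary
    return EXIT_SUCCESS;
}
```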

Users can change the sample code and play with the following generation parameters:

@@ -18,16 +18,16 @@ It's not required to install [../../requirements.txt](../../requirements.txt) for…

```sh
pip install --upgrade-strategy eager -r ../../requirements.txt
-optimum-cli export openvino --model dreamlike-art/dreamlike-anime-1.0 --task stable-diffusion --weight-format fp16 dreamlike_anime_1_0_ov/FP16`
+optimum-cli export openvino --model dreamlike-art/dreamlike-anime-1.0 --task stable-diffusion --weight-format fp16 dreamlike_anime_1_0_ov/FP16
```

## Run

`stable_diffusion ./dreamlike_anime_1_0_ov/FP16 'cyberpunk cityscape like Tokyo New York with tall buildings at dusk golden hour cinematic lighting'`

### Examples

Prompt: `cyberpunk cityscape like Tokyo New York with tall buildings at dusk golden hour cinematic lighting`

![](./512x512.bmp)

@@ -39,8 +39,8 @@ Models can be downloaded from [OpenAI HuggingFace](https://huggingface.co/openai)…
- [dreamlike-art/dreamlike-anime-1.0](https://huggingface.co/dreamlike-art/dreamlike-anime-1.0)
- [SimianLuo/LCM_Dreamshaper_v7](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7)

-## Troubleshooting
+## Note

- The image generated with HuggingFace / Optimum Intel is not the same as the one generated by this C++ sample:

-C++ random generation with MT19937 results differ from `numpy.random.randn()` and `diffusers.utils.randn_tensor`. So, it's expected that image generated by Python and C++ versions provide different images, because latent images are initialize differently. Users can implement its own random generator derived from `ov::genai::Generator` and pass it to `Text2ImagePipeline::generate` method.
+C++ random generation with MT19937 differs from `numpy.random.randn()` and `diffusers.utils.randn_tensor`, so the images produced by the Python and C++ versions are expected to differ because the latent images are initialized differently. Users can implement their own random generator derived from `ov::genai::Generator` and pass it to the `Text2ImagePipeline::generate` method.
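
As a rough illustration of that last point, the sketch below derives a generator from `ov::genai::Generator` and seeds it explicitly. The `float next()` signature, the header path, and the `random_generator` property name are assumptions to check against the installed headers, not a confirmed API:

```cpp
// Sketch only: assumes ov::genai::Generator exposes a virtual `float next()` that the
// pipeline calls to fill the initial latents, and that generate() accepts the generator
// through a property (named random_generator here); check the installed headers.
#include <memory>
#include <random>

#include "openvino/genai/text2image/pipeline.hpp"

// Draws standard-normal samples from a seeded std::mt19937. Results will still differ
// from numpy's generator, but repeated runs with the same seed become reproducible.
class SeededGenerator : public ov::genai::Generator {
public:
    explicit SeededGenerator(uint32_t seed) : m_engine(seed), m_normal(0.0f, 1.0f) {}
    float next() override { return m_normal(m_engine); }
private:
    std::mt19937 m_engine;
    std::normal_distribution<float> m_normal;
};

// Usage (inside the sample's main, after constructing the pipeline):
// ov::Tensor image = pipe.generate(prompt,
//     ov::genai::random_generator(std::make_shared<SeededGenerator>(42)));
```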
