
add performance statistics for image generation #1405

Draft
wants to merge 4 commits into base: master
Conversation

xufang-lisa
Contributor

No description provided.

@github-actions github-actions bot added labels on Dec 18, 2024: category: text to image (Text 2 image pipeline), category: Python API (Python API for GenAI), category: samples (GenAI samples), category: GenAI C++ API (Changes in GenAI C++ public headers)
@ilya-lavrenov
Contributor

Could you please provide an example of such prints, e.g. for SDXL and FLUX?

@likholat
Contributor

likholat commented Dec 20, 2024

@xufang-lisa let's add a custom struct ImageGenerationPerfMetrics and collect the metrics there:

struct OPENVINO_GENAI_EXPORTS RawImageGenerationPerfMetrics {
    std::vector<MicroSeconds> unet_inference_durations; // unet inference duration for each step
    std::vector<MicroSeconds> transformer_inference_durations; // transformer inference duration for each step
    std::vector<MicroSeconds> iteration_durations; // duration of each generation step
};

struct OPENVINO_GENAI_EXPORTS ImageGenerationPerfMetrics {
    float load_time; // model load time (includes reshape & read_model time)
    float generate_duration; // duration of the generate(...) method

    MeanStdPair iteration_duration; // mean/std time of one generation iteration
    std::map<std::string, float> encoder_inference_duration; // inference duration for each encoder
    MeanStdPair unet_inference_duration; // inference duration for the unet model; filled with zeros if there is no unet
    MeanStdPair transformer_inference_duration; // inference duration for the transformer model; filled with zeros if there is no transformer
    float vae_encoder_inference_duration; // inference duration of the vae_encoder model; zero if it is not used
    float vae_decoder_inference_duration; // inference duration of the vae_decoder model

    bool m_evaluated = false;

    RawImageGenerationPerfMetrics raw_metrics;
};

I'd also like to propose returning ov::Tensor as the generate method output and adding a get_performance_metrics() method.
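
For illustration, a minimal usage sketch of how a caller might consume the proposed metrics, assuming the pipeline exposes a get_performance_metrics() accessor as suggested above (the accessor name, the model path, and the units printed below are assumptions, not the final API):

#include <iostream>
#include "openvino/genai/image_generation/text2image_pipeline.hpp"

int main() {
    // Path and device are placeholders for this sketch.
    ov::genai::Text2ImagePipeline pipeline("./stable-diffusion-xl", "CPU");

    // With the proposal above, generate(...) returns the image as ov::Tensor.
    ov::Tensor image = pipeline.generate("a photo of a cat");

    // Hypothetical accessor proposed in this thread; field names follow the struct sketch.
    auto metrics = pipeline.get_performance_metrics();
    std::cout << "Load time: " << metrics.load_time << "\n"
              << "Generate duration: " << metrics.generate_duration << "\n"
              << "Mean iteration duration: " << metrics.iteration_duration.mean << "\n";
    return 0;
}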

3 participants