
Benchmark Code #27

Open · wants to merge 23 commits into main

Conversation

@szeyu (Contributor) commented Aug 15, 2024

Benchmark

Allows users to run their own benchmarks of model(s) on different backends. It analyses the token in / token out throughput for you in a statistical manner.

Benchmark a Model

To benchmark a model, run:

python ellm_benchmark.py --backend <cpu | ipex | openvino | directml> --model_name <Name of the Model> --model_path <Path to Model | Model Repo ID> --token_in <Number of Input Tokens (Max 2048)> --token_out <Number of Output Tokens>

where:

  • --backend cpu | ipex | openvino | directml
  • --model_name Name of the Model
  • --model_path Path to Model | Model Repo ID
  • --token_in Number of Input Tokens (Max 2048)
  • --token_out Number of Output Tokens
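
For instance, a single run might look like this (the model name and repo ID below are placeholders for illustration, not values taken from this PR):

python ellm_benchmark.py --backend openvino --model_name phi-3-mini --model_path microsoft/Phi-3-mini-4k-instruct --token_in 1024 --token_out 512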

Loop to Benchmark the Models

Customise your benchmarking config:

# Define the models
model_names = [
    # list the names of the models to benchmark, one per line
]

# Define the model paths
model_paths = [
    # local path or model repo ID for each model, in the same order as model_names
]

# Define the token length
token_in_out = [
    (1024, 1024),
    (1024, 512),
    (1024, 256),
    (1024, 128),
    (512, 1024),
    (512, 512),
    (512, 256),
    (512, 128),
    (256, 1024),
    (256, 512),
    (256, 256),
    (256, 128),
    (128, 1024),
    (128, 512),
    (128, 256),
    (128, 128),
]

# Choose a backend (keep exactly one line uncommented)
backend = "cpu"
# backend = "directml"
# backend = "ipex"
# backend = "openvino"

# Number of loops
loop_count = 20

Then run:

python loop_ellm_benchmark.py
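
As a minimal sketch, a filled-in config could look like the following; the model names, local path, and repo ID are placeholders, not values taken from this PR:

# Hypothetical example config (placeholder names and paths)
model_names = [
    "phi-3-mini-int4",
    "mistral-7b-int4",
]

model_paths = [
    "microsoft/Phi-3-mini-4k-instruct",  # model repo ID
    "C:/models/mistral-7b-int4",         # local path
]

token_in_out = [
    (1024, 1024),
    (512, 512),
]

backend = "openvino"
loop_count = 20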

Generate a Report (XLSX) of a Model's Benchmark

To generate a report for a model, run:

python analyse_detailed_benchmark.py --model_name <Name of the Model>

  • --model_name Name of the Model
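
For example, with a placeholder model name:

python analyse_detailed_benchmark.py --model_name phi-3-mini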

Generate Reports (XLSX) of Multiple Models' Benchmarks

List the models that you want to generate benchmark reports for:

model_names = [
    # list the names of the models to report on
]

Then run:

python loop_analyse_detailed_benchmark.py
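
As a sketch, with placeholder names matching the earlier example:

model_names = [
    "phi-3-mini-int4",
    "mistral-7b-int4",
]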
