-
To focus more on the model itself, as mentioned in this line, the table only reports the model inference time. For PyTorch/Python users, especially those with multiple GPUs, we recommend using …
-
NCNN usually upscales 480p images at about 970 images per minute (~16 fps) with "realesr-animevideov3-x2" on my Titan Xp, so I can upscale hundreds of thousands of images in what I thought was a very short amount of time.
Then I found your comparison page, which says your NVIDIA V100 GPU can upscale 480p at 65.9 frames per SECOND on PyTorch. Am I reading this wrong? I understand the V100 is faster than the Titan Xp, but when I tried upscaling the same images with the Python script, it took a FEW SECONDS per image ("Testing Image 1", "Testing Image 2", etc.). That is extremely slow with the same model and same scale factor, compared to NCNN, which the community seems to consider much slower than Python. But that has never been my experience: even with v2, Python was just as slow (about 1 image every 3 seconds), while v2 NCNN was upscaling at 450 images per minute.
I'm using the latest RealESRGAN master for Python animevideov3.
"python inference_realesrgan.py -n realesr-animevideov3 -s 2 -i input -o output" is the CMD paste that I'm using for your reference.
I was completely satisfied with NCNN's speed, but if I can get Python to be faster than NCNN, I'd like to know what's wrong.