Use for video processing and performance #67
-
I guess the problem is not the speed, but the training model. The quality of removeBG is based on a model trained with millions of very well-labeled images. Very hard to achieve, I guess :)
-
I wrote a PowerShell script that has solved the problem for me: it transforms any mp4 file into a transparent .mov and runs the work in parallel batches, maximising utilisation of the machine -- probably running about 10x faster. Hope this helps someone out there; as far as I can tell, this is the only easy example online of how to remove a background from a video non-interactively, fast, and with good results.
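In outline, the script does the equivalent of the rough Python sketch below. I'm assuming ffmpeg is on PATH and that the background-removal CLI can be invoked per image as `rembg i <in> <out>`; that invocation, the frame-name pattern and the qtrle codec are placeholder choices, so adjust them for whatever version of the tool you have installed:

```python
# Rough sketch of the mp4 -> transparent .mov pipeline: extract frames with
# ffmpeg, remove backgrounds in parallel, then reassemble a .mov with alpha.
import subprocess
import sys
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

REMBG_CMD = ["rembg", "i"]  # assumption: per-image CLI form; older versions differ
WORKERS = 8                 # number of concurrent background-removal processes


def extract_frames(video: Path, frames_dir: Path) -> None:
    """Split the input video into numbered PNG frames with ffmpeg."""
    frames_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", str(video), str(frames_dir / "frame_%06d.png")],
        check=True,
    )


def remove_background(frame: Path) -> None:
    """Run the background-removal CLI on a single frame."""
    out = frame.with_name("out_" + frame.name)
    subprocess.run(REMBG_CMD + [str(frame), str(out)], check=True)


def assemble_mov(frames_dir: Path, fps: int, output: Path) -> None:
    """Recombine the processed PNGs into a .mov that keeps the alpha channel."""
    subprocess.run(
        [
            "ffmpeg", "-framerate", str(fps),
            "-i", str(frames_dir / "out_frame_%06d.png"),
            "-c:v", "qtrle",  # QuickTime Animation codec carries alpha
            str(output),
        ],
        check=True,
    )


if __name__ == "__main__":
    video = Path(sys.argv[1])
    frames = Path("frames")
    extract_frames(video, frames)
    with ProcessPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(remove_background, sorted(frames.glob("frame_*.png"))))
    assemble_mov(frames, fps=24, output=video.with_suffix(".mov"))
```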
-
Hey Daniel,
Congrats on building a fabulous tool here!
I posted an issue: I wasn't able to get it to install using pip on Anaconda/Python 3.8/Windows 10. When I manually cloned the repo and manually installed torch and torchvision, I was able to run it from the command line.
My main interest in this is as a video content creator. I really need to be able to remove backgrounds from long videos at an industrial scale, e.g. from podcast/webcam recordings. It is possible to do this using NVIDIA Broadcast, but it's a laborious, interactive process which must happen at 1x speed, and you need to hack around with different tools like SplitCam etc. Incredibly, there is no simple command line to strip the background from a video file! NVIDIA probably know they can monetise this later and are holding it back.
There are online services to do this, and they are unbelievably expensive even for a tiny video (~$15 for a 4-minute video!). It makes me think I should create a competing service; this is SUCH an important task for content creators!
I know that it would be possible to use your tool for this purpose, but it's extremely slow and doesn't fully utilise my GPU or CPU. I have a 2080 GTX and a 16-core AMD Ryzen, so it's a beefy machine.
The process I am trialling is to use ffmpeg to split the video file up into fairly compressed 1920x1080 ~30 KB JPGs, so for a 1:15:00 video at 24 fps you are looking at ~113k frames. Your tool is converting them at 2.7 FPS, even with the GPU -- there is no visible GPU utilisation; it's CPU-bound. I noticed that if I split the frames into 4 batches and run them concurrently, each instance still gets the same images/sec. I could probably run about 20 instances on this machine before I hit 100% CPU utilisation. I would imagine using a lower resolution would significantly improve the performance too.
My aim is then to use ffmpeg again to combine all the PNGs into a new video with an alpha channel; then I can use Premiere to turn that video into an alpha mask on top of the original video so I don't lose quality.
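Concretely, the two ffmpeg steps I have in mind look something like the sketch below. The scale, JPG quality, ProRes 4444 codec and file names are just values I'm experimenting with, not anything prescribed by your tool:

```python
# Sketch of the two ffmpeg invocations for the split/recombine workflow above.
import subprocess
from pathlib import Path


def split_to_jpgs(video: Path, frames_dir: Path) -> None:
    """Extract fairly compressed 1920x1080 JPG frames from the source video."""
    frames_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "ffmpeg", "-i", str(video),
            "-vf", "scale=1920:1080",
            "-q:v", "5",                       # JPEG quality (2 = best, 31 = worst)
            str(frames_dir / "frame_%06d.jpg"),
        ],
        check=True,
    )


def combine_to_alpha_mov(pngs_dir: Path, fps: int, output: Path) -> None:
    """Combine the background-removed PNGs into a .mov that keeps the alpha channel."""
    subprocess.run(
        [
            "ffmpeg", "-framerate", str(fps),
            "-i", str(pngs_dir / "frame_%06d.png"),
            "-c:v", "prores_ks", "-profile:v", "4444",  # ProRes 4444 carries alpha
            "-pix_fmt", "yuva444p10le",
            str(output),
        ],
        check=True,
    )


if __name__ == "__main__":
    split_to_jpgs(Path("talk.mp4"), Path("frames"))
    # ... run the background-removal step over frames/ here ...
    combine_to_alpha_mov(Path("frames_out"), fps=24, output=Path("talk_alpha.mov"))
```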
So my question to you, Daniel, is: have you thought about making your /p "entire folder" option run in parallel and create batches? With a single instance I am looking at about 40 hours to process one 1:15:00 video; the parallelisation on my machine would probably get that down to about 1 hour or slightly more. It seems like you are so close to having a killer solution which will benefit so many.
I assume it would be better for you to do this inside the tool using threading. In the meantime I might think about writing some kind of meta tool to create batches and call yours. I am trying to think of the simplest possible way to do this, without resorting to using a database. I'm thinking I would need a fixed number of "worker nodes" and dispatch jobs to them from a queue or something.
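Something along the lines of the rough Python sketch below is what I'm picturing; the `rembg -p <in> <out>` folder invocation is a guess at your CLI, and the worker and batch-size numbers are placeholders:

```python
# Rough sketch of the "meta tool": shard the frames into batch folders and have
# a fixed pool of workers run the background-removal tool on one folder each.
# No database needed -- the executor's input list acts as the job queue.
import shutil
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path
from typing import List

WORKERS = 16          # fixed number of "worker nodes"
BATCH_SIZE = 1000     # frames per batch folder


def make_batches(frames_dir: Path, work_dir: Path) -> List[Path]:
    """Move the extracted frames into numbered batch folders."""
    frames = sorted(frames_dir.glob("*.jpg"))
    batches = []
    for i in range(0, len(frames), BATCH_SIZE):
        batch_dir = work_dir / f"batch_{i // BATCH_SIZE:04d}"
        batch_dir.mkdir(parents=True, exist_ok=True)
        for frame in frames[i:i + BATCH_SIZE]:
            shutil.move(str(frame), str(batch_dir / frame.name))
        batches.append(batch_dir)
    return batches


def process_batch(batch_dir: Path) -> Path:
    """Run the tool's folder mode over a single batch (assumed CLI form)."""
    out_dir = batch_dir.with_name(batch_dir.name + "_out")
    out_dir.mkdir(exist_ok=True)
    subprocess.run(["rembg", "-p", str(batch_dir), str(out_dir)], check=True)
    return out_dir


if __name__ == "__main__":
    batches = make_batches(Path("frames"), Path("batches"))
    # Each worker pulls the next folder as soon as it finishes the previous one.
    with ProcessPoolExecutor(max_workers=WORKERS) as pool:
        for done in pool.map(process_batch, batches):
            print(f"finished {done}")
```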
Best,
Tim