
How can I use GPU acceleration? #55

Closed
luoluoluooo opened this issue Jun 23, 2023 · 6 comments

Comments

@luoluoluooo

My GPU is an NVIDIA RTX 3090. I have already installed CUDA, and install.sh also built the CUDA compilation package. I also enabled enable_gpu on the command line, but when I render, the speed is still very slow, and the process in nvidia-smi does not occupy any GPU memory.


@David-Yan1
Contributor

David-Yan1 commented Jun 23, 2023

enable_gpu really means "enable CUDA-accelerated terrain meshing" and will be renamed. GPU-accelerated rendering is supposed to happen in the short step regardless of whether this flag is passed.

@luoluoluooo
Author

luoluoluooo commented Jun 23, 2023

OK, I got it: not all stages have GPU acceleration.

@bssrdf

bssrdf commented Jun 23, 2023

Actually, the render part of the code enables GPUs regardless of whether the enable_gpu flag is passed:

# render/render.py
def enable_gpu(engine_name = 'CYCLES'):
    # from: https://github.com/DLR-RM/BlenderProc/blob/main/blenderproc/python/utility/Initializer.py
    compute_device_type = None
    prefs = bpy.context.preferences.addons['cycles'].preferences
    # Use cycles
    bpy.context.scene.render.engine = engine_name
    bpy.context.scene.cycles.device = 'GPU'

    preferences = bpy.context.preferences.addons['cycles'].preferences
    for device_type in preferences.get_device_types(bpy.context):
        preferences.get_devices_for_type(device_type[0])

    for gpu_type in ['OPTIX', 'CUDA']:#, 'METAL']:
        found = False
        for device in preferences.devices:
            if device.type == gpu_type and (compute_device_type is None or compute_device_type == gpu_type):
                bpy.context.preferences.addons['cycles'].preferences.compute_device_type = gpu_type
                logger.info('Device {} of type {} found and used.'.format(device.name, device.type))
                found = True
                break
        if found:
            break

    # make sure that all visible GPUs are used
    for device in prefs.devices:
        device.use = True

    return prefs.devices


def render_image(
    camera_id,
    min_samples,
    num_samples,
    time_limit,
    frames_folder,
    adaptive_threshold,
    exposure,
    passes_to_save,
    flat_shading,
    use_dof=False,
    dof_aperture_fstop=2.8,
    motion_blur=False,
    motion_blur_shutter=0.5,
    render_resolution_override=None,
    excludes=[],
):
    tic = time.time()

    camera_rig_id, subcam_id = camera_id

    for exclude in excludes:
        bpy.data.objects[exclude].hide_render = True

    with Timer(f"Enable GPU"):
        devices = enable_gpu()
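The backend-selection loop in enable_gpu() above prefers OPTIX over CUDA and takes the first backend that has a matching device. That preference logic can be illustrated with bpy factored out (pick_compute_device_type is a hypothetical helper for illustration, not part of the Infinigen codebase):

```python
def pick_compute_device_type(available_types, preferred=('OPTIX', 'CUDA')):
    """Return the first preferred Cycles backend that is actually available,
    mirroring the OPTIX-before-CUDA loop in enable_gpu().
    Returns None when no GPU backend matches (Cycles then falls back to CPU)."""
    for gpu_type in preferred:
        if gpu_type in available_types:
            return gpu_type
    return None

# An RTX card typically exposes both backends, so OPTIX wins:
print(pick_compute_device_type({'OPTIX', 'CUDA', 'CPU'}))  # OPTIX
# An older GTX 1070 may expose only CUDA:
print(pick_compute_device_type({'CUDA', 'CPU'}))           # CUDA
```

This is why the log below reports a CUDA device on a GTX 1070: no OPTIX device matched, so the loop settled on CUDA.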

If I use tools/manage_datagen_jobs.py to generate images, GPU/CUDA is not used in the rendering step, even though the log file indicates the GPU was used:

[00:08:01.074] [times] [INFO] | [Enable GPU]
[00:08:01.314] [rendering.render] [INFO] | Device NVIDIA GeForce GTX 1070 of type CUDA found and used.
[00:08:01.314] [rendering.render] [INFO] | Device NVIDIA GeForce GTX 1070 of type CUDA found and used = True.
[00:08:01.314] [rendering.render] [INFO] | Device Intel Core i7-6700 CPU @ 3.40GHz of type CPU found and used = False.
[00:08:01.314] [times] [INFO] | [Enable GPU] finished in 0:00:00.240083
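Since the log can report a device as "found and used" even when rendering later runs on the CPU, an independent check is to poll nvidia-smi while the short stage runs. A minimal sketch (gpu_memory_used_mib and parse_memory_csv are hypothetical helpers; the query flags are standard nvidia-smi options):

```python
import subprocess

def gpu_memory_used_mib():
    """Query per-GPU memory usage in MiB via nvidia-smi.
    If rendering really uses the GPU, these numbers should climb well above 0."""
    out = subprocess.check_output(
        ['nvidia-smi', '--query-gpu=memory.used', '--format=csv,noheader,nounits'],
        text=True)
    return parse_memory_csv(out)

def parse_memory_csv(text):
    # With the flags above, nvidia-smi prints one integer (MiB) per GPU per line
    return [int(line.strip()) for line in text.splitlines() if line.strip()]
```

Running gpu_memory_used_mib() in a loop during the render is a more reliable signal than the "Enable GPU" log lines.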

However, if I manually execute the render step as listed in run_pipeline.sh, e.g.

nice -n 20 $BLENDER --background -y -noaudio --python generate.py -- --input_folder outputs/seaice6/0/fine_0_0_0048_0 --output_folder outputs/seaice6/0/frames_0_0_0048_0 --seed 0 --task render --task_uniqname short_0_0_0048_0 -g arctic intermediate -p render.render_image_func=@full/render_image LOG_DIR='outputs/seaice6/0/logs' execute_tasks.frame_range=[48,48] execute_tasks.camera_id=[0,0] execute_tasks.resample_idx=0

Blender will use the GPU and rendering is significantly faster (25 min on a GTX 1070 vs. 4 hours on an Intel i7-6700 @ 3.4 GHz).
I don't know why tools/manage_datagen_jobs.py is not working for GPUs. What is the difference between running tools/manage_datagen_jobs.py and directly executing the commands in run_pipeline.sh?

Update

I finally figured out why tools/manage_datagen_jobs.py is not using GPUs for rendering.

The culprit is that local_16GB.gin contains the line LocalScheduleHandler.use_gpu=False, which turns off GPUs.

So if you, like me, want to run all other steps on the CPU but do rendering on the GPU, do this:

  • Create an enable_gpu_rendering.gin file in tools/pipeline_configs/ and put the following in it:
LocalScheduleHandler.use_gpu=True
  • At command line, call something like this:
python -m tools.manage_datagen_jobs --output_folder outputs/hello_world --num_scenes 1 \
--pipeline_configs local_16GB enable_gpu_rendering monocular blender_gt --specific_seed 0 --configs desert simple

This will do coarse, populate, and fine_terrain using CPUs but short (rendering) on GPUs.
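One common way a scheduler-level flag like use_gpu can hide GPUs from child processes is via CUDA_VISIBLE_DEVICES: setting it to an empty string makes CUDA report zero devices, so Cycles silently falls back to the CPU even though enable_gpu() runs without error. This is a sketch of that general mechanism, not necessarily LocalScheduleHandler's exact implementation (child_env is a hypothetical helper):

```python
import os

def child_env(use_gpu: bool) -> dict:
    """Build the environment for a spawned job.
    With use_gpu=False, CUDA_VISIBLE_DEVICES='' hides all GPUs from CUDA,
    so any CUDA-capable code in the child quietly runs CPU-only.
    (Illustrative only; not a claim about Infinigen's internals.)"""
    env = dict(os.environ)
    if not use_gpu:
        env['CUDA_VISIBLE_DEVICES'] = ''
    return env

# e.g. subprocess.Popen(render_cmd, env=child_env(use_gpu=False))
# would render CPU-only regardless of what enable_gpu() selects.
```

This would explain why the same command is fast when run by hand (the flag never strips the GPUs from the environment) but slow under the job manager.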

For my machine, which is very old, this is the only way to get decent performance out of it. In particular, Cycles rendering is very slow on CPU only, but switching to CUDA really made a difference, even with a generations-old GTX 1070.

If you have a beefy GPU (3090/4090), also turn on enable_gpu to accelerate terrain generation.

Thanks to @badgids for LocalScheduleHandler.use_gpu=True tip.

@WellTung666

> My GPU is an NVIDIA RTX 3090. I have already installed CUDA, and install.sh also built the CUDA compilation package. I also enabled enable_gpu on the command line, but when I render, the speed is still very slow, and the process in nvidia-smi does not occupy any GPU memory.

I also have the same problem. How can I solve it?

@luoluoluooo
Author

> I also have the same problem. How can I solve it?

Perhaps this is not a problem. In some stages the program uses GPU acceleration; in other stages it does not.

@araistrick
Collaborator

You should expect to see GPU usage briefly during the fine_terrain stage, and for a decent duration during any rendering stage. Zero GPU usage during the coarse/populate stages is expected and typical. Confusion regarding LocalScheduleHandler.use_gpu will be cleared up via a PR.
