-
So I have my image:

```rust
pub fn create_image(width: u32, height: u32, depth: u32) -> Image {
    let mut image = Image::new_fill(
        Extent3d {
            width,
            height,
            depth_or_array_layers: depth,
        },
        TextureDimension::D3,
        &[255, 255, 255, 255],
        TextureFormat::Rgba8Unorm,
        RenderAssetUsages::RENDER_WORLD | RenderAssetUsages::MAIN_WORLD,
    );
    image.texture_descriptor.usage =
        TextureUsages::COPY_DST | TextureUsages::STORAGE_BINDING | TextureUsages::TEXTURE_BINDING;
    image
}
```

Then, I load it as a resource:

```rust
// ...
#[derive(Resource, Clone, Deref, ExtractResource)]
pub struct SimulationResource(pub SimulationImage);

#[derive(Clone, Deref, AsBindGroup)]
pub struct SimulationImage {
    #[storage_texture(0, dimension = "3d")]
    pub image: Handle<Image>,
}
// ...
```

This works fine when I am using a 2D image, because I can easily view it with a camera that has a different render target. But how could I access the 3D image? Of course, just accessing the image with a system does not work:

```rust
fn test(
    simulation: Res<SimulationResource>,
    mut assets: ResMut<Assets<Image>>,
) {
    if let Some(sim_img) = assets.get(&simulation.0.image) {
        // print the first 4 pixels (4 bytes each for Rgba8Unorm)
        println!(
            "First 4 pixels: {:?}",
            &sim_img.data[0..4 * sim_img.texture_descriptor.format.pixel_size()]
        );
    }
}
```

This will just give me 4 white pixels.
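For reference, indexing an arbitrary voxel in that CPU-side byte buffer would look something like the sketch below (my own helper, assuming tightly packed row-major data: x fastest, then y, then z). It still only reads the CPU copy of the asset, so it shows the same stale white pixels:

```rust
use bevy::render::texture::{Image, TextureFormatPixelInfo}; // paths as of Bevy 0.13

/// Returns the raw bytes of the voxel at (x, y, z) in the CPU-side copy.
/// Assumes tightly packed data: x fastest, then y, then z (layer by layer).
fn voxel_bytes(img: &Image, x: u32, y: u32, z: u32) -> &[u8] {
    let size = img.texture_descriptor.size;
    let pixel_size = img.texture_descriptor.format.pixel_size();
    let index = ((z * size.height + y) * size.width + x) as usize * pixel_size;
    &img.data[index..index + pixel_size]
}
```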
So I am very new to rendering in general and to Bevy's render system; I just followed along with this Game of Life Example. I understand that this might be unsupported on the Bevy side of things, but I was just wondering how one could implement this.

Alternative solution: I could just put all layers of the image side by side in a row, but that wouldn't be as efficient, since it would be a lot more costly to access the pixel above another one (you would have to fetch a pixel that is maybe 1000 pixels further to the right). Then everything could be done with just a 2D image.
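For what it's worth, the index math for that side-by-side layout is simple; the sketch below (hypothetical helper, my own names) shows it, and also why neighbors along z end up a whole layer-width apart:

```rust
/// Hypothetical mapping for the side-by-side layout: layer z occupies
/// the x-range [z * width, (z + 1) * width) of one wide 2D image.
fn atlas_coords(x: u32, y: u32, z: u32, width: u32) -> (u32, u32) {
    (z * width + x, y)
}

fn main() {
    // The voxel "above" (same x, y, next z) lands `width` pixels to the right:
    assert_eq!(atlas_coords(5, 3, 0, 1000), (5, 3));
    assert_eq!(atlas_coords(5, 3, 1, 1000), (1005, 3));
}
```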
Replies: 2 comments
-
The problem is that copying from GPU to CPU is much rarer than copying from CPU to GPU. Copying from CPU to GPU has its own …
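To make the asymmetry concrete, here is a rough sketch of what a raw GPU-to-CPU texture readback involves at the wgpu level (plain wgpu, not Bevy's render graph; the function and the Rgba8Unorm assumption are mine, and some field names vary between wgpu versions):

```rust
use wgpu::*;

/// Copies a 3D Rgba8Unorm texture into a mappable buffer and blocks until the
/// bytes are readable on the CPU. The texture must have been created with
/// TextureUsages::COPY_SRC. Rows are padded to 256 bytes, so the returned Vec
/// contains padding that a real caller would strip.
fn read_back_3d_texture(
    device: &Device,
    queue: &Queue,
    texture: &Texture,
    width: u32,
    height: u32,
    depth: u32,
) -> Vec<u8> {
    let unpadded = width * 4; // 4 bytes per Rgba8Unorm texel
    let padded = unpadded.div_ceil(COPY_BYTES_PER_ROW_ALIGNMENT) * COPY_BYTES_PER_ROW_ALIGNMENT;
    let buffer = device.create_buffer(&BufferDescriptor {
        label: Some("readback"),
        size: (padded * height * depth) as u64,
        usage: BufferUsages::COPY_DST | BufferUsages::MAP_READ,
        mapped_at_creation: false,
    });
    let mut encoder = device.create_command_encoder(&CommandEncoderDescriptor::default());
    encoder.copy_texture_to_buffer(
        texture.as_image_copy(),
        ImageCopyBuffer {
            buffer: &buffer,
            layout: ImageDataLayout {
                offset: 0,
                bytes_per_row: Some(padded),
                rows_per_image: Some(height),
            },
        },
        Extent3d { width, height, depth_or_array_layers: depth },
    );
    queue.submit([encoder.finish()]);
    // Mapping is asynchronous: request the map, then drive the device until done.
    let slice = buffer.slice(..);
    slice.map_async(MapMode::Read, |result| result.unwrap());
    device.poll(Maintain::Wait);
    slice.get_mapped_range().to_vec()
}
```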
-
Thank you @bugsweeper! Yeah, I also saw this example. I decided to use the GPU readback example from Bevy instead of the headless renderer example, because I am using 3D images, and with buffers I have much more freedom with the format, the render passes, etc. So now I just write from the compute shader to the buffer, instead of applying the changes directly to the texture before copying and sending the buffer to the GPU. I wonder whether it is better to then copy the buffer back into a new image on the GPU, or to use two discrete images that live on the GPU and get swapped. I will use the latter method for now, just in case anyone sees this.
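For anyone landing here later: the swap variant is just a ping-pong scheme. A minimal sketch (resource and system names are made up) could look like this:

```rust
use bevy::prelude::*;

/// Two GPU-resident images: the compute pass reads `src` and writes `dst`.
#[derive(Resource)]
struct PingPongImages {
    src: Handle<Image>,
    dst: Handle<Image>,
}

/// Runs once per frame after the compute pass has been queued:
/// last frame's output becomes this frame's input.
fn swap_images(mut images: ResMut<PingPongImages>) {
    let images = &mut *images; // reborrow so both fields can be borrowed mutably
    std::mem::swap(&mut images.src, &mut images.dst);
}
```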