Multiple OBJ models rendering #9
Hello @EtagiBI
Thank you for posting your question here - I'm sure there are others out there who are wondering the same about the ability to load multiple OBJ files, and possibly MTL files as well. The short answer is: unfortunately, not at the moment. I currently have a demo that loads a single OBJ file. So my path tracer has the raw ability to load a list of vertices/faces in OBJ file format, and path trace the mesh relatively quickly. However, that Crane model in the demo only has 36 triangles. Even that amount lowers the frame rate on my laptop from 60 to 40 fps. Things get worse if you, for instance, go into the HTML source code, comment out the Crane model, and choose another listed model with a higher poly count. So if you and your friend are trying to load rooms full of hundreds or even thousands of triangles, the frame rate will probably slow to a crawl, like 1 or 2 fps, or the page will even lose the WebGL context. Users would not be able to smoothly look around your room with their mouse, and would probably get frustrated.
Now for the long answer, if you're interested ;-)
When I started this project I was naive in thinking that if I could make geometric mathematical shapes render at 60 fps, then I could just eventually throw a list of triangles at it and it would be the same. But sadly, checking rays against every triangle in a detailed mesh really adds up. And that's just for the initial raycast - you have to multiply that cost by 4 if you want to bounce those same rays around the room to get global illumination. So I looked into acceleration structures like grids, KD-trees, and the BVH - the latter seemed to be the preferred choice for ray tracing and path tracing, so I ported a BVH builder in C++ over to JavaScript. This helps a lot (see my BVH attempt), but the problem is that WebGL 1.0 does not support dynamic indexing of arrays during a loop, such as correctBranch[x]. In WebGL 1.0, that 'x' must be a constant like '2' or a predefined value. 
That means that I can't prune large parts of the tree, which is the whole point of having an acceleration structure in the first place. When WebGL 2.0 is supported inside three.js (hopefully soon), then I can revisit the BVH with dynamic indexing allowed, and even look into the LBVH which uses Morton codes and bit manipulations (allowed only by WebGL 2.0), which can rebuild the entire structure on the GPU for a scene of moving, dynamic triangles - every frame. I would love to be able to list a bunch of meshes like you and your friend want to do in the room scenes, and have it all just work. But getting it to run smoothly is the most difficult aspect of ray/path tracing I'm finding out. Traversing the BVH is not too hard to understand, and it can be done in a tight loop inside a GPU shader in 30 lines or so. What is difficult to grasp and implement on the GPU, is the BVH builder, which must quickly load thousands of triangles into texture memory, create bounding boxes and indices for all of those triangles, and then order them in some efficient fashion to be examined and pruned by the tracer. Unfortunately, GPU BVH builders and source code (even in CUDA, let alone WebGL) are poorly documented and explained. It's no coincidence that a lot of cutting-edge research has been done in this area, and if good results are found, they are assimilated into existing renderers that might be behind patents and not available for public viewing of the GPU source code. However, maybe this will change with more people out there like me trying this stuff out on their own, and posting it online for all to learn from. It's frustrating because I know interactively rendering large amounts of triangles is possible, even inside seemingly low WebGL. For instance I found this cool piece of software, which is similar to my project, but has much more capabilities in terms of rendering large meshes: Antimatter . Scroll down and hit the 'Launch Prototype' button. 
Then you can choose from a list of large meshes. If they can do it in the browser with a BVH, it has to be possible. I would have said you guys should go with that for your project, but it looks like it is going to be for purchase only, and not open source - plus I don't know how deep into needing three.js in your project you already are. I wish I could be of more help to you both, I hope you can find a different approach or a workaround in the meantime. If you would like to ask anything else or need clarification or advice, please don't hesitate. -Erich |
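The WebGL 1.0 limitation Erich describes (array indices inside loops must be compile-time constants) is commonly worked around with a chain of branches. Here is a minimal sketch, in JavaScript, of what that GLSL workaround amounts to - hypothetical names, not the repo's actual shader code:

```javascript
// WebGL 1.0 GLSL only allows constant array indices inside loops, so a
// lookup like stack[ptr] must be emulated with one branch per slot.
// Modeled here in JavaScript to show the shape (and the cost) of the hack.
function constantIndexLookup(stack, ptr) {
  // Equivalent of the GLSL workaround:
  //   if (ptr == 0) return stack[0];
  //   if (ptr == 1) return stack[1]; ...and so on.
  if (ptr === 0) return stack[0];
  if (ptr === 1) return stack[1];
  if (ptr === 2) return stack[2];
  if (ptr === 3) return stack[3];
  // ...one branch per possible stack slot; a deep BVH needs ~24-32 of these,
  // and the GPU pays for the branching on every traversal step.
  throw new Error('stack deeper than the workaround supports');
}
```

Every stack slot costs a branch on every traversal step, which is why pruning a deep BVH this way gets so expensive compared to true dynamic indexing.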
Wow, thanks for a thorough reply! I forgot to mention that we want to render our scenes server side, not in real-time. The whole plan looked like this:
Is it possible? I clearly understand that real-time rendering of multiple textured models is really heavy on resources. |
Hello @EtagiBI |
Hi again @EtagiBI After thinking about it for a while, unfortunately I don't think the tools that I have developed here can help you at the moment. What you need is a robust loader, a simple editor, and an offline beauty renderer. What I have here currently is a non-robust, simple shape renderer that goes as fast as possible so you can move and look around in real time, but still have correct global illumination effects. I started the project with the latter in mind and geared every line of code towards that purpose. I would have to essentially start over with what you have in mind to really do it right, instead of trying to hack something robust (loading .obj files, textures, etc.) into the blazing-fast GPU path tracer that was made for different purposes. But I really like your idea, which is, if I understand correctly, to use three.js and the browser as a visualizing tool to load .obj files into a simple editor where you can click and move furniture around a virtual room using the three.js default WebGL renderer at 60fps. Then when you are happy with how the room looks, you hit 'Render' and it dumps all of that data (a big list of triangles, vertex normals, texture UVs) into the path tracer, which ray casts against those triangles, finds the closest intersection, looks up the normal and uv data for that correct triangle, and colors that pixel. When all pixels have been calculated, it saves the final render as a .png or something. That part would take several minutes without acceleration structures, but you mentioned that you don't mind waiting offline for it to finish the calculations. I might go off and attempt a side project of my own that loads a whole scene of .obj files and then renders offline, because this sounds like a neat project idea - but mine would be minus the editor part; that is a whole other tool set that would take weeks to develop. 
It just occurred to me that someone has already done what you are wanting: Ben Houston and his Clara.io project. Here's the link: Clara.io Ben is a nice fellow who has contributed a lot to the three.js code base. He wants his browser based editor/renderer/modelling software to be able to compete with 3dsMax, Maya, Blender, etc. I hope he and his team succeed because having the ability to be online and collaborating real time while using a sophisticated 3D modeling package is a great idea. In a nutshell, his software loads meshes of any file type (.obj, .fbx, etc), uses three.js renderer in real time to drag, reshape, resize the scene and meshes, then you hit fast preview and the sophisticated V-Ray rendering farm jumps into action. If you like the preview, you hit full render, wait, then hit save image and you're done. It's free to try out, you might give it a go - if only to get some ideas for your project. Sorry I couldn't be of more help with my tools in this repo, but hopefully you have an idea of what needs to happen in your software, and hopefully I at least pointed you in the right direction. If you like, you can post any future findings or breakthroughs, or links to your project here so we can all benefit. -Erich Edit: just found a perfect resource that does what you are wanting and is open source: XRay |
Hello @erichlof! I'm sincerely impressed by the amount of useful information you've given. Thank you very much! I'll try to keep this topic up to date. |
Hi @EtagiBI , Glad to be of help, sorry I couldn't have my existing project just work out of the box for what you wanted. However, you have inspired me to start planning ahead and working towards being able to call the THREE.OBJLoader on various meshes, and then have their data saved to a texture to be consumed by the GPU and path traced. Earlier today I just figured out how to let the THREE.OBJLoader do all the .obj file parsing work (a task I kind of understand, but not completely), and then 'hijack' the newly created THREE.Mesh and open it up and read and save its geometry data to a texture. Next is to try to do the same with materials data from a .mtl file. I'll post a short demo here soon! Thanks :) |
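The 'hijack the THREE.Mesh' step boils down to copying the geometry's flat position array into a texture-shaped buffer. A hedged sketch, using a plain Float32Array instead of the actual three.js objects (the function name is hypothetical; in practice the input would come from mesh.geometry.attributes.position.array and the output would back a THREE.DataTexture):

```javascript
// Pack a mesh's flat vertex-position array into a square RGBA float
// texture layout, padding the unused tail with zeros - the shape of data
// a THREE.DataTexture expects.
function packPositionsIntoTexture(positions, texSize) {
  const texels = texSize * texSize;           // RGBA texels available
  const data = new Float32Array(texels * 4);  // 4 floats per texel
  if (positions.length > data.length) {
    throw new Error('texture too small for this mesh');
  }
  data.set(positions);                        // unused tail stays 0
  return data;
}
```

For example, two triangles (18 floats) fit easily into a tiny 4x4 RGBA texture (64 floats), with the remainder zero-padded.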
Hi @EtagiBI I understand the .obj and .mtl file formats now (they were well designed, which is why they have lasted so long, I guess!), but I'm trying to decide how to retrieve the material data and then make physical ray tracing materials out of it. You can see from the demo that I just assigned a matte bright purple color to all the triangles of the crane origami .obj model. But suppose there was an accompanying .mtl file that said they wanted the crane's neck to be transparent blue, the wings to be shiny silver, and the body to be matte gray or something? For the path tracing engine to use that data, it needs the physical reflectance properties as well as a color and shininess, or a color and transparency. So the matte one would be the easiest; I could just assign that to be Lambertian diffuse in my engine. The shiny one would have to have a metalness flag saying whether it is a metallic specular object or just a shiny piece of plastic. The transparent one would need an Index of Refraction (IoR) saying how much the rays should bend when they enter a transparent surface like glass, clear-coat plastic, or water. So I am kind of going back and forth as to how I should implement the materials-loading part of all this. I'll give you more updates as I work towards a solution. If I can eventually get the BVH working with WebGL 2.0 supported in three.js, then all objects will load and render in real time! |
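One possible mapping from .mtl fields to the kinds of path-tracing materials described above - a hypothetical sketch, not the engine's actual material system. The material type names and the `metal` flag are made up for illustration; Kd, Ns, d, and Ni are standard .mtl keys (diffuse color, specular exponent, dissolve/opacity, index of refraction):

```javascript
// Map parsed .mtl fields to a physical material description a path tracer
// could use. Thresholds and type names here are illustrative guesses.
function mtlToPathTracerMaterial(mtl) {
  if (mtl.d !== undefined && mtl.d < 1.0) {
    // Partially transparent -> refractive; needs an Index of Refraction.
    return { type: 'REFRACTIVE', color: mtl.Kd, ior: mtl.Ni || 1.5 };
  }
  if (mtl.Ns !== undefined && mtl.Ns > 100) {
    // High specular exponent -> shiny; a metalness flag would decide
    // between true metal and clear-coated plastic.
    return { type: mtl.metal ? 'METAL' : 'COAT', color: mtl.Kd };
  }
  return { type: 'DIFFUSE', color: mtl.Kd }; // Lambertian fallback
}
```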
Excellent news! Since three.js has shader support, we're trying to implement native shadows for our models. |
@EtagiBI Hi, how is your project going? I also want to do similar projects. Any suggestions? I know a lot of open source renderers, and I'm ready to try |
@FishOrBear Yes, we decided on Blender Cycles as well. Render quality is great, but it took a while to adjust parameters. |
@EtagiBI I understand that some of the Cycles documentation is missing. I found that the |
Hi @erichlof, any news? I'm very interested in your project... but to do online client background rendering (not real-time) of a Three.js scene created online, loading OBJs and applying custom materials dynamically (no MTLs involved). It would be nice to have more OBJ and three.js support before February 2019... Thank you |
Hi @meblabs
I'm sorry, but I'm kind of at a stand-still with OBJ rendering until the folks at three.js implement WebGL 2.0 support. In order to render larger scenes or multiple OBJ files, I need to physically load the triangle data onto the GPU, and the only way to do that currently is through a data texture. I have successfully done it with one small OBJ file (see the simple OBJ demo) - but as soon as I try loading multiple files or 1 large OBJ file (the Utah teapot with 1000 triangles, for instance), the renderer slows to a crawl and crashes or just fails to compile.
Therefore I needed some kind of acceleration structure for searching through all those triangles, and the only way to do that is to have random access into large arrays, which is not supported by WebGL 1.0 (still the only version supported by three.js). WebGL 2.0 brings with it the possibility to look up data randomly inside an array through the GPU fragment shader, which I absolutely must have in order to continue with that part of the project.
If wait-time for rendering is not a concern, you could go with a CPU renderer, which is not accelerated of course, but definitely has the capability to sort through huge amounts of OBJ triangle data pretty quickly. It would just render a static image though, which is not the focus and direction of my project.
The only other alternative I can think of is that you create something with three.js, use their converter to convert the entire scene to a file readable by a production renderer, like Octane for Cinema4D, Cycles for Blender, V-Ray for Clara.io, etc., and hit the render button inside their software. It should be the best of both worlds: able to be accelerated somewhat with their proprietary acceleration structure, and able to handle huge amounts of scene data through streaming.
Sorry I can't be of more help. Best of luck to you with your project. Please let us know if you find a temporary solution with other software, as I have had some similar questions and requests for larger amounts of data rendering. |
Check it out: http://www.zentient.com/
Loading obj in shader via texture (bvh)
|
Thanks @erichlof ... so let's wait for the three.js upgrade to WebGL 2.0. Exporting data to another offline renderer is not an option, since I need to render client-side and online. But thank you for your explanation. When you say that static rendering is not the scope of your project, please consider that your work is the best I've seen here regarding path tracing etc.; the tech you built can be used in a real context, so I would consider simply generating a client-side hi-res, high-quality (but slow) render, besides the super-fast real-time rendering that is awesome but more for science. So I hope you'll consider my suggestions when the three.js upgrade becomes ready... See you in the next months ;) |
@mrboggieman thank you! But I've already seen that project... the problem is that it isn't open source... I can't find a repository. And the work done by @erichlof seems better regarding the final render quality after many samples... and AntiMatter is not Three.js based :( |
@meblabs I'll let everyone know if I can gain any ground with WebGL 1.0 while I'm waiting for WebGL 2.0 support. |
@erichlof wow that's great! if I were you, I'd wait for webgl 2.0, but with WebGL 2.0 it would be awesome to also have a working PBR shader, simply configurable (like the one included in Three.js), and that supports at least a simple texturing method... Let me know if it is possible or a dream! I can wait some months :) Thankyou again! |
@erichlof wow that's great! If I were you, I'd wait for WebGL 2.0... and with WebGL 2.0 it would be awesome to also have a working PBR shader, simply configurable (like the one included in Three.js), that supports at least a simple texturing method... Let me know if it is possible or a dream! I can wait some months :) Thank you again! |
@meblabs To get PBR data on top of that, we would need to pack the data into a larger structure inside the texture, such as: dataTexture[3,0].r = 1.0; dataTexture[3,0].g = 1.0; dataTexture[3,0].b = 1.0; dataTexture[3,0].a = 0.5; dataTexture[4,0].r = 0.0; dataTexture[4,0].g = 1.0; dataTexture[4,0].b = 0.5; dataTexture[4,0].a = 0.25; So it can be done, but as you can see there need to be a lot more texture data slots (around 20) for each triangle. The final texture size would need to be the number of triangles in the model, times 5 rgba slots per triangle. So roughly calculating, if we had a 10,000 triangle model, that's 50,000 .rgba texture slots (200,000 total floating-point data entries packed in). Which seems like a lot, but even most cell phones can handle a 4096x4096 .rgba texture, so that's 16,777,216 available texture elements, each with their own .rgba channels (4 values for each texture element), so 67,108,864 possible floating-point data entries packed into 1 texture. Edit: argh, I forgot uv texture coordinates need to be specified for each of the 3 vertices, so that's 6 more data numbers to pack in; maybe pad it to 8 numbers (2 .rgba texture elements) to be memory look-up efficient, and use the 2 remaining slots for Texture ID (which texture to use), extra texture info, etc. So I kind of know how I would approach it, but getting materials to change on the fly inside some kind of material editor would take a lot more work and know-how. That's why they have teams of 10, 20 or more working on OTOY Octane, Blender Cycles, etc. I don't know if I could do all that alone, and it isn't really part of the scope of this project. But it definitely is possible and has already been implemented in more sophisticated renderers. Hope that helped! :) |
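The texture-budget arithmetic above can be sketched as a small helper (hypothetical, for illustration only):

```javascript
// Each triangle occupies some number of RGBA texels (5 in the estimate
// above, i.e. ~20 data slots, before uv/padding). Check whether a model
// of a given triangle count fits in a square RGBA texture.
function textureBudget(triangleCount, texelsPerTriangle, texSize) {
  const needed = triangleCount * texelsPerTriangle; // RGBA texels needed
  const available = texSize * texSize;              // RGBA texels available
  return {
    neededTexels: needed,
    neededFloats: needed * 4, // 4 floats per RGBA texel
    fits: needed <= available,
  };
}
```

textureBudget(10000, 5, 4096) reproduces the numbers above: 50,000 texels needed (200,000 floats), comfortably inside the 16,777,216 texels of a 4096x4096 texture.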
@erichlof wow, seems a little complicated... But when your base tech is ready, I can work like a monkey and you could tell me what to write :D (and I may pay other monkeys to help me) |
@meblabs I'm still looking at the AntiMatter source - I might post some of it here - I have gone in and renamed the minified variables and functions so instead of reading 'float m=m0(g, h);' it reads 'float result = calculateFresnel(vec3 intersectionPoint, int material_ID);' I'm still looking into it. :) |
@erichlof thanks and good work... let me know if I have to do something in future ;) |
Hi @meblabs and all, After going through his code line by line, here is my decompressed version of the AntiMatter shader: antimatter.glsl. I reverse-engineered it and added clear, meaningful variable names and function names for maximum readability. There are a lot of similarities between his path tracing shader and mine here on my repo. I am now in the process of decompressing the antimatter.js index file that does all the WebGL initialization stuff and creates the BVH-to-data-texture for GPU consumption. This part is even more complicated than the shader's intersectBVH function. Creating the acceleration structure is always more difficult, because you have to take all of the thousands (or millions) of triangles, vertex data, and material data, sort them, and compress them onto a texture in a meaningful way. But hopefully by studying his source I can make some headway. ;-) |
@erichlof ahaha great work! Regarding the BVH, you should already have your implementation done, right? Or is yours not for triangles, but for primitives only? |
@meblabs Regarding the BVH, I have a basic triangle BVH builder that works. Typically you can survive without a BVH if you are just intersecting mathematical shapes like spheres, cylinders, cones, boxes, etc.; that's why you see everyone's beginning ray tracers rendering only those types of shapes. When you get into loading models though, many triangles need to be searched and intersected (because in the graphics world, most 3D models are represented as triangles), so that's when you need to introduce a BVH. I ported a C++ BVH that I found on GitHub (I gave credit to the author in the comments) to JavaScript. So it produces a list of bounding boxes and a list of raw triangles (vertex position data) in 2 different textures to be loaded on the GPU. The roadblock I was hitting was not being able to maintain a working bounding box stack on the GPU (a WebGL 1.0 limitation) so that I could search through the stack with dynamic array indexing and intersect it. I don't know if there will be any performance difference between employing the AntiMatter WebGL 1.0 workaround, or just using array indexing in WebGL 2.0. That remains to be seen. I'll keep you updated - I'm about to try his hack and stick it into my code and just see if it works at all. :) |
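The first step of any BVH builder like the one described - computing an axis-aligned bounding box per triangle before the boxes get sorted and merged - looks roughly like this (a sketch, not the ported C++ builder itself):

```javascript
// Compute the axis-aligned bounding box (AABB) of one triangle, given as
// 9 floats: 3 vertices x (x, y, z). A BVH builder starts by doing this
// for every triangle, then groups nearby boxes into parent boxes.
function triangleAABB(tri) {
  const box = {
    min: [Infinity, Infinity, Infinity],
    max: [-Infinity, -Infinity, -Infinity],
  };
  for (let v = 0; v < 3; v++) {
    for (let axis = 0; axis < 3; axis++) {
      const value = tri[v * 3 + axis];
      box.min[axis] = Math.min(box.min[axis], value);
      box.max[axis] = Math.max(box.max[axis], value);
    }
  }
  return box;
}
```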
@erichlof yes I know the history of 3d graphics, I found you just after the nVidia keynote... searching "webgl raytracing"... I thought that if nVidia starts to deliver RT dedicated hardware, some mad boy could have a solution to implement it in Three.js :D ... and let's see when webgl will support the new hardware ;) Ok I'll wait for good updates.. thank you! |
Hey you did it man!! 🎉🎉🎉 |
Hi @meblabs and all ,
Yay, initial success! (well, for the most part anyway). Wouldn't you know it, the very day I get the WebGL 1.0 BVH shader code working is the day that three.js releases version r97, which has initial support for WebGL 2.0! Haha
So now I'm at a fork in the road. The WebGL 1.0 version works great for models under 800 triangles, but as soon as I tried heftier models just a little over that, say 1000 triangles, it either will not compile, or I can see the rendered model for a brief moment (in all its glory) before the frame rate drops to 0, the browser tab crashes, and the WebGL context is lost, so you have to close and reopen the tab, which is annoying. I have a couple of ideas about why this is happening - one is all those 'if' statements inside the workaround functions for WebGL 1.0's lack of dynamic array indexing. GPUs do not like too many branching 'if' statements - if you throw too many at them, the shader just crashes, or won't even compile. The only glimmer of hope is that maybe I overlooked something, because antimatter.js was able to load thousands of triangles with WebGL 1.0, and I am using the same workaround functions that I manually unpacked from the minified file.
The other fork I could take is just to abandon the WebGL 1.0 workaround and go with WebGL 2.0, getting rid of all those 'if' statements in the workarounds. This sounds simple - just call the WebGL 2.0 renderer from the three.js setup code - but it is more involved. Since WebGL 2.0 uses OpenGL ES 3.0 (I know, confusing right?), all of my established path tracing shader code on this repo will not compile right away. I have to manually go in and change stuff like 'attribute' to 'in' and 'varying' to 'in' and 'out', among other things. Here's a nice list of TODO's to get WebGL 2.0 working: WebGL 2.0 from 1.0
So once I can get WebGL 2.0 working, I can see if dynamic array indexing under WebGL 2.0 helps my crashing issue. As always I'll keep you all updated with my progress. 
Sorry it has been quiet around this repo the last couple of months, I have been wrestling with BVHs and WebGL 1.0 limitations and the like. :) |
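The mechanical part of the WebGL 1.0 to 2.0 shader port mentioned above (the attribute/varying renames plus the required #version directive) could be sketched as a naive string transform. This is illustrative only; as the linked TODO list shows, real ports also need gl_FragColor, texture2D, and other changes:

```javascript
// Naive sketch of porting GLSL ES 1.00 source toward GLSL ES 3.00:
// 'attribute' becomes 'in'; 'varying' becomes 'out' in vertex shaders
// and 'in' in fragment shaders; a #version directive must come first.
function portShaderSource(source, isVertexShader) {
  const varyingReplacement = isVertexShader ? 'out' : 'in';
  const ported = source
    .replace(/\battribute\b/g, 'in')
    .replace(/\bvarying\b/g, varyingReplacement);
  return '#version 300 es\n' + ported;
}
```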
ahaha... I read about the crashes... maybe too many calls/recursions with 1000+ triangles? Great work! |
@meblabs Yeah, I don't know yet why the WebGL 1.0 version crashes after 1000 triangles. You mentioned recursion call amounts, but actually no GPU shaders allow recursion of any kind, hence the stackLevel[x] approach. So you push and pop all the various branches as you descend the BVH tree, which is why I needed dynamic array indexing. Now the CPU side loves recursion - that's why I went the recursion way with the Builder.js file. That uses JavaScript and is strictly CPU-side. I wish GPUs allowed recursion; that would make things much simpler! ;) I'll keep investigating, but in the meantime I'll try the WebGL 2.0 way, which has more promise. |
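The push/pop pattern Erich describes - replacing recursion with an explicit stack, since GPU shaders forbid recursion - can be illustrated on a plain binary tree (a hypothetical sketch, not the repo's traversal code):

```javascript
// Count nodes of a binary tree without recursion: an explicit stack
// replaces the call stack. On the GPU this stack is the fixed-size array
// (stackLevel[x]) that needs dynamic indexing to push and pop.
function countNodesIterative(root) {
  const stack = [root];
  let count = 0;
  while (stack.length > 0) {
    const node = stack.pop();
    if (!node) continue; // pruned/empty branch: simply don't descend
    count++;
    stack.push(node.left || null);
    stack.push(node.right || null);
  }
  return count;
}
```

In a real BVH traversal, "pruned branch" means the ray missed the node's bounding box, so neither child is pushed - that skipping is the whole payoff of the structure.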
@erichlof, I'm amazed by your progress! |
@MEBoo Those demos have exposed point and spot lights, and the older demos have huge spheres hanging in the air or big quad area lights (like the museum demos). Things work well in those idealistic lighting conditions, but once you try to render an apartment or bathroom with recessed lighting, the noise returns big time. The only solution to this indoors problem that I can see at the moment is bi-directional path tracing, so I am revisiting some of that old code to see if I missed any optimizations. |
@EtagiBI |
@erichlof nice new demo!! |
@MEBoo |
@erichlof Hey, just saw the moveable BVHs with models!! Don't know how you did it, but it's very fast! So a real spotlight, low light, a hi-poly model with texture + other maps, on a BVH updated and moved at runtime!! |
@MEBoo Now, I've been struggling with figuring out how to do this with a mesh skeleton, bones and animation, which actually move the mesh vertices in real-time on the GPU vertex shader. I thought I could go down the bone hierarchy and transform the ray by the inverse of each bone, but it turns out to be a little more complicated than that, because of weighting and skinning deformations and such. But I will post my findings if I get something working with simple animations. About the accumulation of samples, it's actually just 1 sample over and over again, but I let the background scenery (that which is not actively moving) 'bleed' a little more from the previous frame. So there is a little more motion blur effect on the ground and walls, but it is not distracting because those things are static. 1 old sample bleeds more into the new sample, so it's like having 2 samples for a split second I guess. On the dynamic objects, or when the camera is moving, I manually turn down the 'bleeding' from the previous frame, in order to minimize distracting motion blur that would occur if I did nothing about it. It is a delicate balance between smooth motion blur which covers up distracting noise, and moving objects which you want to be more crisp and clear without too much distracting motion blur. :) |
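The 'bleeding' between frames described above is essentially an exponential moving average with a motion-dependent weight. A hedged single-pixel sketch - the weights are illustrative, not the repo's actual values:

```javascript
// Blend the previously accumulated pixel value with this frame's fresh
// 1-sample result. A high previousWeight lets history 'bleed' in (smooth,
// noise-hiding motion blur on static background); it is turned down while
// the camera or objects move, keeping them crisp at the cost of more noise.
function accumulatePixel(previous, current, isMoving) {
  const previousWeight = isMoving ? 0.5 : 0.9; // illustrative weights
  return previous * previousWeight + current * (1.0 - previousWeight);
}
```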
@erichlof nice... understood! mmm don't know why you are working on IK/animations ... it's a big world apart! don't know if it is GPU based... |
@MEBoo |
@erichlof Hi!! |
@MEBoo Hi! I am going to try adding multiple model files to the Difficult lighting demo, the one with the slightly cracked open door and the 3 objects on the coffee table. In the original, those are supposed to be 3 Utah teapots with 1000 triangles each and different materials for each one. The current demo has ellipsoids, but I always wanted to have the 3 classic models in there, and now I have the means to add them I think. That will be step 1 to getting multiple BVH objects in. Step 2 will be a BVH for the BVH's! |
Hi @MEBoo and @EtagiBI It is finally starting to look like the original classic scene by Eric Veach in his seminal paper! I realize now that I could have done some trickery with offsetting the casting rays (or how instancing is done in ray tracing) since all of the objects have the same shape, and I just might do that for the final demo - but this is actually doing it the hard way for proof of concept: it loads a teapot of 4,000 triangles, makes its BVH, uploads it to the GPU for path tracing, then loads another teapot of 4,000 triangles, makes its BVH, uploads to GPU, then loads yet another teapot of 4,000 triangles, makes its BVH, uploads to GPU. So in the end, we have 12,000+ triangles spread between 3 models, each with their own BVH and materials, as you can see in the image. Now for the not-so-good news: if you look at the top left corner framerate, it has gone down by half. This demo used to run on my admittedly humble laptop at 50 fps; now it is at 25 fps. Still real time and interactive, and amazing that all this is happening in a freakin' browser, but nonetheless not as fast as I was hoping for. It is safe to say that adding more objects would eventually grind the shader to a halt. So I will continue exploring ways of, first of all, getting it to compile the first time, every time, and then increasing the framerate (which is less crucial, but would be nice). As always I'll keep you guys updated. I just wanted to share the initial success (tinged with a little failure, lol) and finally progress this epic thread! Sorry it has taken this long to get to this point, but other avenues I have gone down have helped get this multiple-OBJs feature started and hopefully improved! :-) |
Nice news and milestone 🥇 !!! 3 questions:
After all, I think that this is the tech of the future... but you can't dream of a real-time real-world application for now. But now we can have a "background" client photo-realistic rendering engine 😉 |
@MEBoo 1: Well, yes and no. Somewhere along the way, I think it was a couple of months ago, I decided to support .gltf and .glb (gltf in binary format for faster transmission) and remove the examples of the .OBJ files and other formats. The reason is twofold: first, .OBJ is heavier and less compressed than .glb. And second, .OBJ is an old format, so even though I can extract the three.js data from the three.js-created mesh when it loads, three.js does not know how to insert PBR materials into that old format, and there's no way for authors to define those types of materials in the old format when they create them in 3dsMax, Maya or Blender. glTF, on the other hand, natively supports textures of all types like metalness maps, and physical materials like glass with IoR specified by the author, which I in turn absolutely need to load into my path tracer. I know this decision might leave out some models that we have lying around, but the good news is that free websites like ClaraIO are able to convert any file type into glTF for faster web transmission and native PBR support. In fact, you can load [insert your favorite format here] into Clara, then ADD free PBR materials that are ray-tracing friendly, then save the whole thing, hit 'Export All' glTF 2.0, and you're done. That's exactly what I did for 90% of the demo models on this repo; they were originally in another format. This decision makes my life a little easier by reducing the corner cases and code size of handling the three.js Mesh after it has been loaded by an arbitrary unknown-in-advance format. This way I can either intercept the glTF data myself (it is in a human-readable format, the material stuff anyway) or wait further down the pipeline and get everything from three.js's correctly created Mesh with ray-tracing friendly materials and specifications (which is what I'm currently doing). 
Of course you could try this whole process with three.js's FBXLoader, for example, with some minor modifications to my demo code, but then again, I want to only think about 1 format that is built for the web, works with three.js, supports animations, and has modern detailed material specifications. 2: I ran into the 1st-time-fail, 2nd-time-pass compilation problem back when I created the CSG museum demos a while ago. That's why there are 4 separate demos. Initially I had all 14 CSG models in the same museum room, but it wouldn't compile at all. Then I reduced it by half to 6 or 7, and it compiled on the 2nd time only. Then I split it further into 4 demos with 3 or 4 objects each, and it compiles every time. I think it has to do with the amount of 'if' statements you have in the GPU code. CPUs love 'if' statements; GPUs - not so much! If you have too many branches, it crashes. It must not like all the 'if ray hit bounding box' checks on all the models - some parts of the screen have to traverse the BVHs, and some parts of the screen get lucky and hit an easily-reflected surface or wall, which also partly explains the framerate drop: GPU thread divergence. 3: Which ties into the performance drop - yes, I think it is because of the different models, GPU divergence, and branch statements. I don't believe the triangle count has much to do with it. Take a look at the BVH_Visualizer - it handles 100,000 triangles at 60 fps on my humble machine (of course if you fly into the dragon model, the frame rate goes down, but for the most part, it doesn't even break a sweat). So there are a couple of things to try in the near future: a BVH for the BVHs (though in this simple case of 3 similar teapot objects, I'm not sure if that will help any), and, like you mentioned, combining all 3 teapots into a super-teapot type shape and placing a BVH around the 12,000 triangles. That might work better. 
Also, in my last post I mentioned 'trickery': you can actually take the modulus of the ray origin and treat the result as multiple rays, so you get hits for 3 objects (copies for free) even though you only loaded 1 teapot into the scene. This is a little more advanced and just a tad deceitful (ha!), but it's something I want to try eventually, for example a forest with thousands of trees seen from a helicopter view.
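A simplified sketch of that modulus trickery, in plain JavaScript for clarity (in the real renderer this would happen per ray in GLSL, and the function names here are hypothetical): wrapping the ray origin into one repeating cell makes a single loaded model appear at every cell, so rays a cell apart see identical local geometry.

```javascript
// Domain repetition: wrap the x/z components of a ray origin into one
// cell of size `cellSize`, recentered so the model sits mid-cell. Rays
// then intersect the single loaded model as if it were tiled infinitely.
function wrapToCell(origin, cellSize) {
  // Euclidean modulo (stays correct for negative coordinates),
  // shifted by half a cell so the cell is centered on the model.
  const wrap = (v) => ((v % cellSize) + cellSize) % cellSize - cellSize / 2;
  return { x: wrap(origin.x), y: origin.y, z: wrap(origin.z) };
}

// Two rays starting one cell apart land on the same local origin:
const a = wrapToCell({ x: 1.0, y: 2.0, z: 3.0 }, 10);
const b = wrapToCell({ x: 11.0, y: 2.0, z: 13.0 }, 10);
```

The subtlety (and the "advanced" part) is that a wrapped ray can exit its cell mid-flight, so a correct implementation has to step cell by cell rather than wrap the origin once; the sketch above only shows the core coordinate trick.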
Wow... hope you find a way!
@MEBoo
@erichlof Yes, I already do this (material changes) in my little project... In three.js, materials, meshes, and groups have names! So I can parse the scene and do whatever I want.
@MEBoo
@MEBoo That will be the functionality of the glTF viewer I'm working on, which will be merged into this repository when it's done. Currently it loads multiple glTF models into a three.js scene; then you can call another function that reads the scene and prepares all the models for path tracing. In the near future I'll make it so users can drag & drop new models into the viewer to replace the old ones and call the same function on the new models.
Success on multiple fronts! Not only is it compiling and loading correctly every time (without crashes), I got the frame rate up to about 30 fps, which is still real time, and I even applied a hammered-metal material to the steel teapot on the left! Now it looks almost exactly like Eric Veach's original rendering from his bi-directional path tracing thesis. :)

@MEBoo I achieved the every-time compilation by still loading the 3 teapots separately (so it could be any type of model, any number of models, models different from each other, different triangle counts per model, whatever you want), but then, before uploading to the GPU, I merged the triangle data into one texture that still fits comfortably in a 2048x2048 three.js DataTexture. That way the shader doesn't have to read from 3 different geometry data textures (which was causing the crashing and the slower frame rate); it just reads from one larger 'uber' scene data texture.

I guess it's fitting that this is the 100th post of this epic thread! I think we can safely close out this topic. ;-D
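A minimal sketch of that merge step (the packing layout, names, and 32-floats-per-triangle figure are assumptions for illustration, not this repo's actual format): each model's triangle data is appended into one Float32Array sized for a single 2048x2048 RGBA DataTexture, and the starting texel of each model is recorded so the shader knows where each mesh begins.

```javascript
// One 2048x2048 RGBA float texture = 2048 * 2048 texels * 4 floats each.
const TEX_SIZE = 2048;
const FLOATS_AVAILABLE = TEX_SIZE * TEX_SIZE * 4; // 16,777,216 floats

// models: array of Float32Array, each already laid out texel-by-texel.
// Returns the merged buffer plus each model's starting texel offset.
function mergeModels(models) {
  const merged = new Float32Array(FLOATS_AVAILABLE);
  const offsets = [];
  let floatCursor = 0;
  for (const data of models) {
    if (floatCursor + data.length > FLOATS_AVAILABLE) {
      throw new Error("scene exceeds one 2048x2048 texture");
    }
    offsets.push(floatCursor / 4); // starting texel for this model
    merged.set(data, floatCursor);
    floatCursor += data.length;
  }
  return { merged, offsets };
}

// Three teapots of ~4,000 triangles each, assuming 8 RGBA texels
// (32 floats) per triangle for positions, normals, and a material id.
const teapot = new Float32Array(4000 * 32);
const { merged, offsets } = mergeModels([teapot, teapot, teapot]);
```

Under that assumed 32-floats-per-triangle layout, one such texture tops out around 16,777,216 / 32 = 524,288 triangles; the real ceiling depends on the renderer's actual per-triangle packing.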
@erichlof Congratulations! Let's close the thread :) One last question: how many vertices/polygons can the 2048x2048 texture handle? Now... could you build a demo of a full interior scene, with many polygon models and many lights? Something like n2k3 showed us, but with furniture and textures?
@MEBoo About the room demo: yes, that would be the ultimate goal, any number of lights and a lot of triangles in the scene at the same time. n2k3 is definitely pushing things in the right direction; his awesome apartment model has 100,000+ triangles and a cool loading animation. Let me merge his project, and then maybe we can work towards that by adding more lights. After adding all that, it will probably only run at 10 fps in the browser, but hopefully it will run, compile every time, and at least be somewhat interactive. :)
@erichlof Good and cool!! Maybe it will only run at 10 fps today... but hey, this project is for the future ;)
@MEBoo I'm closing out this issue now, but you can still reply to it if you like. :D Thanks to all involved!
@MEBoo I'm here! Couldn't test these days :(
@MEBoo Well, it's comforting to know that users with better systems will achieve real-time results approaching 60 fps, rendering 100,000-plus triangles, all inside a browser! That's also a nice confirmation of, and testament to, the BVH builder and traversal code that was inspired by previous authors and custom fit for this project. Thank you for posting the results! :-D
Hello,
First of all, I would like to thank you for all your efforts! Your project seems to be the only actively maintained project dedicated to photorealistic rendering with three.js.
As for my question: is it possible to render multiple OBJ models with your PTR? A friend and I are working on a three.js-based room planner as our university project. We have a bunch of textured furniture models, with each model's materials defined in a corresponding MTL file. Is this feature supported by PTR? As far as I can see from your demos, at the moment PTR works with simple shapes only.