About laziness and asynchronous loading #755
Possible loading strategy (just thoughts):
A glTF asset author may want to force a particular loading strategy, so we could think of a possible hinting extension for that.
I think the runtime could assume that in the general case (btw, glTF-pipeline has a stage to remove unused objects).
Option 3.iii is certainly the first step down the rabbit hole of possible optimizations and custom implementations. One could imagine sorting the meshes (and thus, the tasks to load their resources) based on their distance to the viewer, one could do occlusion tests to see (sic) which meshes have to be loaded and rendered at all, or one could prioritize the meshes in a queue based on their approximate screen space occupation (computed from the bounding box of the mesh).

The main difference, from a high-level perspective, is between 3.i and 3.ii. In the first case, 3.i, the infrastructure is trivial. In the second case, 3.ii, the implementation needs the ability to defer the rendering, e.g. of a `meshPrimitive`.

(Side note: I also wondered about this in the context of the glTF tutorial. The first one would be far easier to explain, but could lead to undesirable implementations in practice. And changing the implementation afterwards would be a hassle...)

The second approach also includes the question of whether the loading tasks are scheduled during an initialization, or when the resource is encountered for the first time. (Overly suggestive pseudocode ahead)

Initialize all resources, and render only the elements whose resources are already loaded:
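A rough sketch of what this first variant could mean, with `loadResourceAsync`, `allMeshPrimitives`, `allResourcesLoaded` and `render` as placeholder names, not part of any actual API:

```js
// Variant 1: schedule the loading of all resources up front...
function initializeAllResources(gltf) {
  const resources = [
    ...Object.values(gltf.buffers || {}),
    ...Object.values(gltf.images || {}),
    ...Object.values(gltf.shaders || {}),
  ];
  for (const resource of resources) {
    loadResourceAsync(resource); // placeholder: fetches the data, sets resource.loaded = true when done
  }
}

// ...and during each rendering pass, render only what is already available
function renderScene(gltf) {
  for (const meshPrimitive of allMeshPrimitives(gltf)) {
    if (allResourcesLoaded(meshPrimitive)) {
      render(meshPrimitive);
    }
    // not loaded yet: simply skip it, it will show up in a later pass
  }
}
```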
Try to render, and initialize resources when they are required:
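Again only a sketch, with the same placeholder names, plus a placeholder `requiredResources` that collects the buffers, images and shaders that a `meshPrimitive` depends on:

```js
// Variant 2: during each rendering pass, trigger the loading lazily,
// the first time a resource is encountered
function renderScene(gltf) {
  for (const meshPrimitive of allMeshPrimitives(gltf)) {
    const pending = requiredResources(meshPrimitive).filter((r) => !r.loaded);
    if (pending.length === 0) {
      render(meshPrimitive);
      continue;
    }
    for (const resource of pending) {
      if (!resource.loading) {
        resource.loading = true;
        loadResourceAsync(resource); // placeholder: sets resource.loaded = true when done
      }
    }
    // rendering of this meshPrimitive is deferred to a later pass
  }
}
```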
But for the case that a glTF is "minimal", this might only be an implementation detail (and I'd have to think about whether one approach has notable pros/cons compared to the other - also keeping in mind that the OpenGL initialization has to take place somewhere).
Just commenting with a "keep it simple" suggestion here: you probably shouldn't go too far down the option 3.iii "rabbit hole", and can just presume that all of the glTF's meshes must load before the glTF can be displayed. For large complex models, like whole cities, there is 3D Tiles, which is on track with the OGC to become an open standard. 3D Tiles uses glTF files as the payload of each tile, and adds the mechanics needed to manage models that can't be loaded all at once. So, a single glTF file is either a small whole scene, or a small portion of a scene, and should be loaded atomically (and asynchronously when possible/applicable). Larger scenes are constructed out of multiple smaller glTF files. This is all my own interpretation of course, so let me know if there are counterpoints to this.
You may want to check this out: https://github.com/fl4Re/rest3d-new

Regards,
-- Rémi
@RemiArnaud In order to properly understand the details of the rest3d-new approach, I'll first have to become more familiar with the related concepts (threejs, websockets, ...), but I hope that I understood the goals correctly. I had seen the BOF presentation before, and am aware that there are several approaches for various forms of streaming (some more are mentioned in the BOF). One could try to classify them roughly by whether they refer to streaming...
The main advantages of the sort of asynchronicity that I originally referred to here (which is still the simplest one) would mainly apply for scenes that consist of many small models (meshPrimitives), each consisting of their own, small buffers. For scenes consisting of one large object, one would have to develop different approaches. So although glTF is designed to be able to describe complex scenes, I think that most glTF assets will still be small enough so that they are loaded completely in a few seconds (and the "large" ones will require a dedicated infrastructure anyhow).

So @emackey: I think that it is true that most glTF assets can be considered as "atomic" in this sense. (One could, as a precaution, try to cover future developments along the lines of #37 and the issues that are linked to that one, but AFAIK there are no specific plans to bring this into core yet.) So I think that it is reasonable to load all resources of a single glTF asset synchronously, considering the many options and open points for various forms of asynchronous/streaming transmission, and especially considering how much simpler a basic viewer is in this case.
@javagl Agreed, but one minor nitpick on terminology:
On the web, "synchronously" is a bad word, because in the old days JavaScript would lock the browser's UI thread while awaiting a large reply from the server. Synchronous requests are now deprecated. So, loading a glTF online will always be async, with the UI and the JavaScript app free to continue running while the server gets its act together on a large asset.

A non-embedded glTF may have many server-side assets (textures, bin file, shader files) that must be individually requested from the server by the glTF reader. Internally, the glTF reader is responsible for tracking one or multiple async server responses. But more importantly, the client code that called the glTF reader only sees a single asynchronous action (one promise, or one callback, or one modelReady event firing) that means the glTF reader has finished loading the asset and all of its parts. The glTF changes from "not available at all" to "textured and ready to render" in that one event. This is how it works today in Cesium, and I'm pretty sure Three.js and the others do likewise.
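The shape of this pattern could be sketched roughly like the following (this is not the actual Cesium or Three.js API; `collectResourceUris` and `resolveUri` are made-up helper names):

```js
// Hypothetical glTF reader: many individual requests internally, one promise externally
function loadGltf(url) {
  return fetch(url)
    .then((response) => response.json())
    .then((gltf) => {
      // Request every external resource referenced by the asset (buffers, images, shaders)
      const pending = collectResourceUris(gltf).map((uri) =>
        fetch(resolveUri(url, uri)).then((response) => response.arrayBuffer())
      );
      // The caller only sees this single promise, resolving once everything is there
      return Promise.all(pending).then(() => gltf);
    });
}

// Client code: one async action, from "not available at all" to "ready to render"
loadGltf('model.gltf').then((gltf) => {
  // start rendering here
});
```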
Thanks, indeed "synchronous" here was wrong, but it should actually mean exactly what you described - for me (as a JS novice), this will likely boil down to creating one `Promise`.

I already had some glances at the three.js loader and Cesium. In some cases, it's hard to figure out which functionality belongs to the "core" of the library or its loader infrastructure, or to the glTF loader in particular (or to one of the plethora of (external) JS libraries that seem to be involved everywhere, even for seemingly trivial things). However, I think that an implementation of the basic core functionality of a glTF viewer can be rather straightforward, even with plain JavaScript+WebGL, iff there is no fine-grained asynchronous loading involved: the actual viewer can simply receive the whole glTF, plus the buffers/images/shaders, and just display it. (Of course, the "advanced" features like animations+skinning etc. will take some effort, but this should be manageable as well.)
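In other words, the top-level wiring could be as simple as something like this, with `loadGltf` as in the sketch above and a purely hypothetical `SimpleGltfViewer` class:

```js
// Load everything first, then hand the complete asset to a simple viewer
loadGltf('model.gltf').then((gltf) => {
  const viewer = new SimpleGltfViewer(document.getElementById('canvas')); // hypothetical viewer class
  viewer.display(gltf); // all buffers/images/shaders are already available at this point
});
```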
@javagl do you think this issue should move to https://github.com/KhronosGroup/glTF-Tutorials as a roadmap for a tutorial on runtime implementation options?
@pjcozzi @javagl Should we close it here and open a new one in the glTF-Tutorials repo?
OK to close here and open elsewhere. Maybe I could try to extract/summarize what has been discussed here so far. But in any case, the actual tutorial would have to be written by someone who is more familiar with JavaScript.
@javagl could you close this and submit a summary of this to the glTF-Tutorials repo?
Creating a reminder for me (and closing it here, to increase the pressure ;-)). I'd first try to write the summary, maybe even just as a gist, so others can review it before it becomes a section in the main tutorial.
I have summarized the discussion here in a gist: https://gist.github.com/javagl/bfde5cfab4240843120ed6eb38f4af87

I'm not entirely sure how to best proceed from here, but I opened KhronosGroup/glTF-Tutorials#24 - maybe we can sort this out there.
Fantastic, thank you @javagl!
This is, again, not a real "issue of glTF", and not a real question. But after reading and writing some code related to glTF loaders and viewers in different programming languages, I think that there are different possible strategies for implementors regarding asynchronicity. For example, an implementor could...

- ...
- first load all of the external resources (`buffer`, `image`, `shader`), and when they are all loaded, send the whole glTF asset to the renderer

These are roughly sorted by how desirable they are, from least to most. And this obviously coincides with the implementation effort...
I think that most people would agree that blocking during a rendering pass should be avoided (with additional constraints, e.g. in JavaScript, where certain tasks have to be fine-grained to not block the browser, or can only be done asynchronously anyhow).
Usually, the initialization of GL data structures has to happen on the rendering thread (or in a "rendering call"), and thus the question about laziness here is driven by the laziness during loading: the initialization of a GL texture has to assume that the `image` data is already loaded - otherwise, it will have to be deferred to a later rendering pass.

In a sophisticated viewer, people might even expect that it will render as much as it can, at any point in time. This means that the `meshPrimitive` objects should pop up one by one, as their required data is loaded (and maybe even be displayed with a default texture until their texture is loaded completely).

So it would be desirable to have fine-grained asynchronous loading. With "fine-grained", I mean that each `buffer`, `image` and `shader` may be loaded asynchronously and individually (even though this is hardly applicable for embedded or binary glTF). But this may require a considerable infrastructure to be set up: the renderer has to collect information from various sources. For example, when rendering a `meshPrimitive`, there may be three types of elements - the `buffer`, `image` and `shader` data that it refers to - that may involve a lookup into some lazy-loading infrastructure, and each of them may cause the actual rendering to be deferred to the next pass if the required data still has to be loaded. (This can already be fiddly - even when not considering any error handling...)
One of my initial thoughts here was whether it would be legitimate to load all `buffer`, `image` and `shader` objects based on the contents of the corresponding top-level dictionaries, so that later - at the point where GL initialization and rendering should be done - one can be sure that all data was already loaded. This would make things much simpler, of course (although it raised the question of whether one could assume that a glTF asset is minimal, in the sense that it does not contain references to unused `image` objects, for example). Taken one step further, one could even create the corresponding GL objects (buffers, programs and texture IDs) during an initialization, before they are actually required.

Are there any general thoughts or recommendations about all this?
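For the last point, a minimal sketch of what such an up-front GL initialization could look like with plain WebGL, assuming the raw data has already been loaded (`gl` is a `WebGLRenderingContext`; the surrounding bookkeeping is omitted):

```js
// Upload one already-loaded glTF buffer into a GL buffer object
function createGlBuffer(gl, arrayBuffer) {
  const glBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, glBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, arrayBuffer, gl.STATIC_DRAW);
  return glBuffer;
}

// Upload one already-loaded image into a GL texture object
function createGlTexture(gl, image) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  return texture;
}

// Compile and link one already-loaded shader pair into a GL program object
function createGlProgram(gl, vertexSource, fragmentSource) {
  const compile = (type, source) => {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    return shader;
  };
  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);
  return program;
}
```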