I thought you were referring to additional assets on the terrain when you used the term mesh, but it seems you actually mean the terrain geometry itself. Does chunked LOD still need a full mesh per chunk (if you are using a TIN, all three coordinates are required for each vertex)? I think this is the first point of misunderstanding. Do you correctly understand the structure of a chunk (a precomputed 3D mesh as a basis, an additional height field for some extra displacement, and an index buffer)?

I don't need the mesh. Actually, I don't even need a grid. Everything is procedurally generated in the vertex shader. My approach does not require even a single attribute for the terrain to be generated; everything is set by uniforms and textures. Quite a different approach.
You mean techniques like an implicit grid computed from gl_VertexID, looking up heights in textures based on the computed coordinates? Yeah, that's pretty common.
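A minimal sketch of that implicit-grid idea, written in Python for illustration (a vertex shader would do the same arithmetic on gl_VertexID; the names here are made up):

```python
# Illustrative sketch: deriving 2D grid coordinates for a terrain patch
# purely from a linear vertex index, the way a shader would from
# gl_VertexID, so no per-vertex attributes are needed at all.

def grid_coords(vertex_id: int, verts_per_row: int) -> tuple:
    """Map a flat vertex index to (x, y) on an implicit grid."""
    return vertex_id % verts_per_row, vertex_id // verts_per_row

def to_uv(x: int, y: int, verts_per_row: int) -> tuple:
    """Normalize grid coordinates to [0, 1] texture coordinates,
    which would then be used to sample a heightfield texture."""
    return x / (verts_per_row - 1), y / (verts_per_row - 1)

# A 4x4 patch drawn with 16 vertices:
print(grid_coords(0, 4))   # (0, 0)
print(grid_coords(5, 4))   # (1, 1)
print(to_uv(3, 3, 4))      # (1.0, 1.0)
```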
I do not know how chunked LOD does it; we certainly do not have a separate base mesh. But we also do not have the terrain vertex shaders computing positions algorithmically at all detail-level classes, for performance reasons and for reasons outlined below. There are actually 3 classes/approaches used, depending on the quadtree level.
I suppose your implementation differs from Ulrich's/Kevin's models. Can you procedurally generate a chunk without needing a full 3D mesh?
Full 3D mesh of what? If you mean whether there is some base spherical patch mesh on top of which we'd modulate heights, then no. OT never used TINs; the vertex shader can generate everything from a heightfield texture and the positional info of the tile/node on the sphere. However, we currently create some intermediate gridded representations that are cached, because heightfield lookups aren't the only thing that makes up the terrain in OT. Heightfields come from several sources (elevation dataset, procedurally refined heights, generated heights), and they can also be modified by dynamic vector overlays (things like craters are modifiers).
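The idea of reconstructing a vertex from per-tile info plus a heightfield lookup can be sketched as follows. This is an illustrative toy model, not OT's actual code: the tile origin/size stand in for uniforms, a nested list stands in for the heightfield texture, and a flat plane stands in for the sphere.

```python
# Illustrative sketch: a terrain vertex built with no per-vertex data,
# only per-tile "uniforms" (origin, size) and a heightfield lookup.

def terrain_vertex(u, v, tile_origin, tile_size, heightfield):
    """Planar toy variant: place the vertex on the tile, then displace
    it by the sampled height. On a sphere the displacement would be
    along the local surface normal instead of straight up."""
    n = len(heightfield) - 1
    h = heightfield[round(v * n)][round(u * n)]   # nearest-sample lookup
    x = tile_origin[0] + u * tile_size
    y = tile_origin[1] + v * tile_size
    return (x, y, h)

# A 2x2 toy heightfield:
hf = [[0.0, 1.0],
      [2.0, 3.0]]
print(terrain_vertex(1.0, 0.0, (10.0, 20.0), 5.0, hf))  # (15.0, 20.0, 1.0)
```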
How are the textures stored and referenced by the chunks?
...
... you have texture arrays and buffers (filled with the chunks' data and indices). The Limiter manages those resources, replacing unneeded data with newly fetched/generated data. Probably the textures for the same LOD form a single layer in the texture array. Is that a correct assumption? Also, several chunks probably share the same VBO; otherwise there would be a huge number of buffers.
All chunks can be in a single VBO, and all textures can be in a single texture array (tiles at all LODs in the quadtree have the same size). It's not exactly that way, because there are ~3 separate ranges of quadtree levels, and the data storage is optimized for each. The VBOs sometimes contain just heights; at lower levels they also contain things like horizontal displacements and some other attributes.
However, it can also be split into separate buffers and textures, and we support that too. Interestingly, there's practically no difference in performance between the two modes currently, but we expect to be able to optimize the former mode better once we get rid of some constraints of the GL 3.3 cards that we still support.
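The single-VBO/single-array layout described above implies that a chunk is located by a slot index rather than by its own GL object. A hypothetical addressing sketch, with made-up tile sizes and strides (none of these numbers come from the engine):

```python
# Hypothetical sketch: addressing resident chunks inside one shared VBO
# and one shared texture array, assuming every quadtree tile has the
# same resolution. All constants here are illustrative assumptions.

CHUNK_VERTS = 129 * 129        # assumed vertex count per tile
VERT_STRIDE = 4                # assumed bytes per vertex (e.g. packed height)

def vbo_byte_offset(slot: int) -> int:
    """Byte offset of a chunk's vertex data inside the shared VBO."""
    return slot * CHUNK_VERTS * VERT_STRIDE

def texture_layer(slot: int) -> int:
    """Layer of the chunk's heightmap in the shared texture array;
    one layer per resident chunk in this toy model."""
    return slot

print(vbo_byte_offset(3))   # 199692
print(texture_layer(7))     # 7
```

A cache limiter can then recycle slots: when a chunk is evicted, its slot index is handed to newly fetched/generated data, so no GL objects are created or destroyed at runtime.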
When I was speaking about a single resource, I meant I could have a single texture array per partition. In fact I have two, because I'm separating the elevations from the coverage (different data types with different packing), but generally the number of resources is very small. And, oh yes, I don't have even a single VBO, since I don't have any attributes. In the clipmap approach there are no meshes, just pure textures. I don't even have a grid to extrude the heights from. Everything is procedurally generated.
Yeah, that's only logical nowadays. Although we refer to that not as procedural but algorithmic, because we use "procedural" for stuff that generates extra detail from existing data using fractal-based algorithms. Just in case the "procedural" expression I used above caused some confusion.
If I understand it correctly, when you have to look up distant parts of the texture where samples are far apart, it's not going to be very cache-friendly unless there is also a "mipmapped" representation. I wonder how much more memory clipmaps have to keep, compared to a LOD-fitting quadtree representation of the same thing for a single view at ground level. But that will largely depend on the reachable quadtree depth and the world size ...
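To make that comparison concrete, here is a back-of-the-envelope sketch. All the numbers (window/tile sizes, bytes per sample, resident tiles per level) are illustrative assumptions, not figures from either engine, so only the shape of the formulas matters:

```python
# Back-of-the-envelope memory estimate for the question above.
# Both schemes grow linearly with the number of levels; the constant
# factor depends on the assumed window/tile sizes and residency.

def clipmap_bytes(levels, window=255, bpp=2):
    """A clipmap keeps one (window x window) texture per level."""
    return levels * window * window * bpp

def quadtree_cut_bytes(levels, tiles_per_level=4, tile=129, bpp=2):
    """A LOD-fitted quadtree cut keeps roughly a constant number of
    resident tiles per level around a single ground-level view,
    since coarser levels cover the distance."""
    return levels * tiles_per_level * tile * tile * bpp

print(clipmap_bytes(12))        # bytes for a 12-level clipmap stack
print(quadtree_cut_bytes(12))   # bytes for a 12-level quadtree cut
```

With these particular assumptions the two come out comparable, which matches the observation that the answer largely depends on the reachable depth and the per-level residency.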
Today I tried to record some videos to demonstrate my solution, but I failed. Camtasia significantly disturbs the frame rate and the display of my built-in profiler. FRAPS is a little better, but it is also too unreliable to reflect what's going on on the screen. What are you using for capturing the screen?
We have our own built-in video recorder, but recently we also successfully tested Nvidia ShadowPlay.