Hi, I just started working on a globe rendering project similar to this one, and I have been reading some of the posts on the tech you use for Outerra.
My specific question is about the way you handle meshes. From what I read, Outerra does not use a base mesh, but instead generates a mesh based on position and view direction. I had not considered this approach, so I have a couple of questions.
Was this done to save the vertex-processing cost of a base mesh? My strategy so far has been to create a base cube mesh, and the vertex shader normalizes its vertices to turn the cube into a sphere. When I get close enough, the tessellation shaders start refining the mesh.
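For context, here's the mapping I'm describing, written out as a small CPU-side sketch in C++ rather than shader code (`cubeToSphere` is just my own illustrative name, nothing from Outerra):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// The cube-to-sphere step my vertex shader performs: normalize the
// cube-surface position, then scale by the planet radius.
Vec3 cubeToSphere(Vec3 p, double radius)
{
    double len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return { p.x / len * radius, p.y / len * radius, p.z / len * radius };
}

int main()
{
    // A corner of the +Z cube face, pushed onto a sphere of radius 6371 km.
    Vec3 s = cubeToSphere({ 1.0, 1.0, 1.0 }, 6371.0);
    std::printf("%f %f %f\n", s.x, s.y, s.z);   // ~3678.4 on each axis
}
```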
My thought was that clipping and culling would eliminate any vertices that didn't need to be transformed in the later shader stages. Is that an incorrect assumption?
Secondly, I'm using <i, j, k> coordinates for the cube, where each component ranges over [-1, 1]. To scale the sphere, I just multiply the normalized vector by a radius. I noticed your k value is slightly higher than 1, which I believe you said was to produce more regular quad shapes. Are you doing this so you can texture a roughly square region for each quad on the sphere's surface? See my photo: would the region in black be covered by its own mipmap, so that as you get closer the texture resolution increases and gives finer terrain detail?
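To put some numbers on the quad-regularity question, here is a quick check I ran (plain C++, my own experiment, not anything from Outerra) comparing how large a grid step appears at the center of a cube face versus at its corner after plain normalization:

```cpp
#include <cmath>
#include <cstdio>

// Angle spanned on the unit sphere by one grid step of size du,
// starting at (u, v) on the +Z cube face (i.e. between the normalized
// directions of (u, v, 1) and (u + du, v, 1)).
static double arcStep(double u, double v, double du)
{
    double a[3] = { u, v, 1.0 };
    double b[3] = { u + du, v, 1.0 };
    double la = std::sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
    double lb = std::sqrt(b[0]*b[0] + b[1]*b[1] + b[2]*b[2]);
    double dot = (a[0]*b[0] + a[1]*b[1] + a[2]*b[2]) / (la * lb);
    return std::acos(dot);
}

int main()
{
    const double du = 1e-3;
    double center = arcStep(0.0, 0.0, du);  // step at the face center
    double corner = arcStep(1.0, 1.0, du);  // same step at the face corner
    std::printf("center: %g rad, corner: %g rad, ratio: %.2f\n",
                center, corner, center / corner);
    // Prints a ratio of about 2.1: a grid step at the corner spans less
    // than half the arc it spans at the center. In solid angle the
    // imbalance works out to 3^(3/2), roughly 5.2x.
}
```

If I'm reading your posts right, that area imbalance is what the k > 1 adjustment is meant to counter, so that each quad ends up roughly square on the sphere.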