Several questions about the Engine

Edding3000:
I've been reading the blog and checking out the forum for about half a year now (maybe more), and it's finally time to put down some questions about the internal workings:

First: The dataset.
How did you get the earth dataset at this resolution, and is there a chance I can get hold of it too? I'm doing some planetary stuff too, but at the moment it is generated 100% procedurally. The results of this are, in my opinion, not sufficient, and I need something like a real dataset. So effectively: where is the dataset from, and if you modified it, is there a chance I can get hold of it (I know it's huge)?


Second: The terrain
How does the splitting work? I use a scheme like this (roughly sketched in code below):
- create a texture for the vertices and read the data back to the CPU (for physics and mesh generation)
- create a high-resolution texture for the normals, then slope, then material (the material is used in the pixel shader when rendering)
- read back the normals -> build the TBN matrix into the mesh
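
A minimal sketch of that scheme; every function here is a hypothetical placeholder for a render-to-texture pass or a readback of its result:

--- Code: ---
struct Patch { int lod; };

// Placeholder passes: in the scheme above each of these is a GPU
// render-to-texture step, or a readback of the rendered result.
void renderHeightTexture(Patch&) {}  // vertices generated in a shader pass
void readbackHeights(Patch&)     {}  // CPU copy for physics + mesh generation
void renderNormals(Patch&)       {}  // high-resolution normals...
void renderSlopeMaterial(Patch&) {}  // ...then slope, then material (sampled in the PS)
void readbackNormals(Patch&)     {}  // used to build the TBN matrix into the mesh

void buildPatch(Patch& p)
{
    renderHeightTexture(p);
    readbackHeights(p);
    renderNormals(p);
    renderSlopeMaterial(p);
    readbackNormals(p);
}
--- End code ---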

Because you don't generate the patches, the 'generate texture' step should probably be replaced by streaming from HDD/internet? I guess you then also generate a texture (probably only for deep-level patches) with fractals and combine it with the dataset?


Third: The atmosphere!
What algorithm/paper are you using for the atmosphere? I myself am using O'Neil's GPU version, but it just is NOT realistic enough in my opinion (and there are some bugs in there too!).


Fourth: Internal engine workings.
I'm using a scene graph which has transforms, physics nodes and some other stuff. Culling is done with a separate structure (an octree, to be precise!). Materials are simply assigned to a mesh, and when I traverse the tree it gives back the 'render state' needed to render each object/mesh/whatever. Then I separate alpha from non-alpha, sort based on material and device state as fast/well as possible, and finally render. I wonder if you use a similar technique.
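
For illustration, a minimal sketch of that kind of key-based sort; the key layout is an assumption, just one way to get the ordering described above:

--- Code: ---
#include <algorithm>
#include <cstdint>
#include <vector>

struct DrawItem {
    uint64_t key;   // packed: alpha flag | material id | device state | depth
    int      mesh;  // handle to whatever gets drawn
};

// Pack the key so a single sort yields: opaque before alpha, then grouped
// by material, then by device state, then front to back by depth.
// (For alpha items you would store an inverted depth to draw back to front.)
uint64_t makeKey(bool alpha, uint32_t material, uint32_t state, uint32_t depth)
{
    return (uint64_t(alpha)             << 63)
         | (uint64_t(material & 0xFFFF) << 47)
         | (uint64_t(state    & 0xFFFF) << 31)
         |  uint64_t(depth & 0x7FFFFFFF);
}

void flush(std::vector<DrawItem>& items)
{
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.key < b.key; });
    // walk the sorted list, switching material/device state only when
    // the corresponding key bits change
}
--- End code ---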

Fifth: Shadows.
What shadow algorithm do you use at the moment? CSMs, I guess?



Congrats on the great engine; I will keep checking the blog every day for updates :P!

cameni:
Hi,

First: The dataset
There are plenty of datasets available, for a list of global ones look here http://www.vterrain.org/Elevation/global.html
We are using remapped data, adapted for our architecture.

Second: The terrain
This is slightly more complex. Coarse LOD terrain tiles are created purely from elevation data; finer levels use the elevation data as the basis and refine it further with fractal algorithms. Either way, the textures are generated after these high-resolution tile data are loaded/refined/generated, and after that the mesh is produced via render-to-vertex-buffer (r2vbo), so it's not actually read back to the CPU. Only certain tile levels are read back to the CPU, for collision purposes.
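
In outline, something like this sketch; the names are illustrative, not the actual engine code:

--- Code: ---
struct Tile {
    int  level;
    bool onGPU = false;  // heights/textures/mesh live in GPU memory
    bool onCPU = false;  // heights copied back for collision
};

// Placeholders for the GPU passes.
void generateHeights(Tile& t, bool refine) { t.onGPU = true; (void)refine; }
void generateTextures(Tile&) {}      // normals/materials from the final heights
void renderToVertexBuffer(Tile&) {}  // r2vbo: mesh built without a CPU round trip
void readbackHeightsToCPU(Tile& t) { t.onCPU = true; }

void buildTile(Tile& t, int datasetMaxLevel, int collisionLevel)
{
    // Coarse levels come straight from the elevation dataset;
    // finer levels refine it further with fractal noise on the GPU.
    generateHeights(t, /*refine:*/ t.level > datasetMaxLevel);

    // Textures are generated only after the tile heights are final.
    generateTextures(t);

    // The mesh is produced via render-to-vertex-buffer, so it is
    // never read back to the CPU just for rendering.
    renderToVertexBuffer(t);

    // Only selected tile levels are read back, purely for collision.
    if (t.level == collisionLevel)
        readbackHeightsToCPU(t);
}
--- End code ---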

Third: The atmosphere
We started off from O'Neil's paper too, but I rewrote the code several times to better understand what's going on in there and where the bugs come from, changing the scaling function and adding some tweaks to suppress the artifacts we were getting.
But I'm still not satisfied and I will have another take on it shortly.
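
For reference, the scale function in question: the optical-depth approximation from O'Neil's GPU Gems 2 chapter ("Accurate Atmospheric Scattering"), transliterated from the shader into C++. The polynomial is fitted to one specific setup (scale depth 0.25, atmosphere thickness 2.5% of the planet radius), which is part of why the method gets brittle when you tweak it:

--- Code: ---
#include <cmath>

// Approximates the optical depth for a ray leaving a point at a given
// angle; only valid for the scale depth the polynomial was fitted to.
float scale(float cosAngle, float scaleDepth = 0.25f)
{
    float x = 1.0f - cosAngle;
    return scaleDepth *
        std::exp(-0.00287f + x * (0.459f + x * (3.83f + x * (-6.80f + x * 5.25f))));
}
--- End code ---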

Fourth: Internal engine workings
No scene graphs; that would limit us. We use a quadtree for the terrain, and everything else is just hooked into it at specific levels. We don't treat objects and terrain uniformly either, but use specialized code for each, organizing it into batches during quadtree traversal so it can be dumped to the GPU fast.
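
Schematically, a single traversal collects everything into batches; a minimal sketch with made-up types:

--- Code: ---
#include <vector>

struct Object;  // whatever is hooked into the tree

struct QuadtreeNode {
    QuadtreeNode* child[4] = {};          // all null: this tile is a leaf
    std::vector<Object*> hooked;          // objects attached at this level
    bool visible() const { return true; } // placeholder for the frustum test
};

struct Batches {
    std::vector<const QuadtreeNode*> tiles; // terrain tiles to render
    std::vector<Object*> objects;           // objects, handled by specialized code
};

// One pass over the quadtree gathers terrain tiles and the objects hooked
// into specific levels, so both can be dumped to the GPU in large batches.
void collect(const QuadtreeNode* n, Batches& out)
{
    if (!n || !n->visible())
        return;
    out.objects.insert(out.objects.end(), n->hooked.begin(), n->hooked.end());
    if (!n->child[0]) { out.tiles.push_back(n); return; }
    for (QuadtreeNode* c : n->child)
        collect(c, out);
}
--- End code ---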

Fifth: Shadows
At the moment we are using a variation of this technique.

Hope that helps :)

Edding3000:
Another question, about the fractals: do you tile fractals over the dataset? I guess you can't do it without tiling, because otherwise precision will run out on the GPU. Or do you use CPU fractal generation?


'No scene graphs; that would limit us. We use a quadtree for the terrain, and everything else is just hooked into it at specific levels. We don't treat objects and terrain uniformly either, but use specialized code for each, organizing it into batches during quadtree traversal so it can be dumped to the GPU fast.'

A question about that too: there must be some kind of hierarchy tree to ensure that objects that depend on another object are 'moved' (rotated, scaled, etc.) together?

cameni:

--- Quote from: Edding3000 ---Another question, about the fractals: do you tile fractals over the dataset? I guess you can't do it without tiling, because otherwise precision will run out on the GPU. Or do you use CPU fractal generation?
--- End quote ---
Nope, the fractals aren't tiled; every place generates a unique pattern. Why would the precision run out?
The fractals run solely on the GPU.


--- Quote ---There must be some kind of hierarchy tree to ensure that objects that depend on another object are 'moved' (rotated, scaled, etc.) together?
--- End quote ---
Such a hierarchy tree is used only within objects; the objects themselves are referenced in world-space (or local-space) coordinates.
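
Schematically (hypothetical types): a float transform hierarchy inside each object, and a single high-precision placement for the object itself:

--- Code: ---
#include <vector>

struct Node {                        // internal hierarchy only:
    float transform[16];             // relative to the parent node
    std::vector<Node*> children;     // turret on a tank, wheel on a car, ...
};

struct Object {
    double position[3];              // world-space (or local-space) placement
    float  rotation[4];
    Node*  root;                     // the hierarchy never crosses objects
};
--- End code ---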

Edding3000:

--- Quote from: cameni ---
--- Quote from: Edding3000 ---Another question, about the fractals: do you tile fractals over the dataset? I guess you can't do it without tiling, because otherwise precision will run out on the GPU. Or do you use CPU fractal generation?
--- End quote ---
Nope, the fractals aren't tiled; every place generates a unique pattern. Why would the precision run out?
The fractals run solely on the GPU.
--- End quote ---
It will run out due to 32-bit float precision: 0.938182345394, for example, will NOT fit in an fp32. Without tiling you must specify a range of some kind:

patch LOD 0 = -1 to +1
patch LOD 1 = -1 to 0 and 0 to +1
etc.: LOD 20 = 0.9959944 to 0.9959944 (at single precision the endpoints of the patch collapse to the same value)

No more precision!

If you have the solution to this one, patent it first before sharing it, because in all planet renderers I've encountered so far this is an unsolved problem!

Of course it only starts appearing at very deep levels (18+ for normals and 20+ for height maps)...
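
The effect is easy to demonstrate: fp32 has a 24-bit mantissa, roughly 7 significant decimal digits, so the tail of a deep-LOD coordinate is silently rounded away:

--- Code: ---
#include <cstdio>

int main()
{
    double d = 0.938182345394;        // the coordinate we want to address
    float  f = static_cast<float>(d); // what actually fits in fp32
    std::printf("double: %.12f\nfloat : %.12f\n", d, static_cast<double>(f));
    // prints something like:
    //   double: 0.938182345394
    //   float : 0.938182353...   <- everything past ~7 digits is gone
}
--- End code ---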

--- Quote ---Such a hierarchy tree is used only within objects; the objects themselves are referenced in world-space (or local-space) coordinates.
--- End quote ---
Okay, I understand. But doesn't this make it difficult to, for example, let a planet move? All objects would then have to be translated to match the movement of the planet every frame?! That's crazy! :P... It must work in a different way. But I'm more interested in the way you render:

Is there a 'common' pipeline which all geometry goes through, or do you use specialized rendering systems per 'object'? And how do the shaders/materials/effects work in your engine? UberShader, dynamic linking, dynamic generation?

Greetz.
