Outerra forum


Pages: [1] 2

Author Topic: Several questions about the Engine  (Read 13123 times)

Edding3000

  • Member
  • **
  • Posts: 93
Several questions about the Engine
« on: July 05, 2010, 12:00:56 pm »

I've been reading the blog and checking out the forum for about half a year now (maybe more), and it's finally time to put down some questions about the internal workings:

First: The dataset.
How did you get the Earth dataset at this resolution, and is there a chance I can get hold of it too? I'm doing some planetary stuff too, but at the moment it is generated 100% procedurally. The results, in my opinion, are not sufficient, and I need something like a real dataset. So effectively: where is the dataset from, and if you modified it, is there a chance I can get hold of it (I know it's huge)?


Second: The terrain
How does the splitting work? I use a scheme like this:
create a texture for vertices and read the data back to the CPU (for physics and mesh generation)
create a high-resolution texture for normals, then slope, then material (use the material in the PS when rendering)
read back the normals -> create the TBN matrix in the mesh.

Because you don't generate the patches, the 'generate texture' step should probably be changed to streaming in from HDD/internet? I guess you then also generate a texture (probably only for deep-level patches) with fractals and combine it with the dataset?


Third: The atmosphere!
What algorithm/paper are you using for the atmosphere? I'm using the O'Neil GPU version myself, but it just is NOT realistic enough in my opinion (and there are some bugs in there too!).


Fourth: Internal engine workings.
I'm using a scene graph which has transforms, physics nodes and some other stuff. Culling is done with a separate graph (an octree, to be precise!). Materials are just assigned to a mesh, and when I traverse the tree it hands back the 'render state' for each object/mesh/whatever. Then I separate alpha from non-alpha, sort based on material and device state as fast/well as possible, and finally render. I wonder if you use a similar technique.
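For comparison, the queue-building part of that scheme (split alpha from opaque, then sort each queue by material and device state to minimise state changes) can be sketched like this; all names are illustrative, not engine code:

```python
from dataclasses import dataclass

@dataclass
class RenderItem:
    material_id: int
    state_id: int
    has_alpha: bool
    mesh: str

def build_queues(items):
    """Split draw calls into opaque and alpha queues, each sorted by
    (material, device state) to minimise GPU state changes."""
    key = lambda i: (i.material_id, i.state_id)
    opaque = sorted((i for i in items if not i.has_alpha), key=key)
    alpha = sorted((i for i in items if i.has_alpha), key=key)
    return opaque, alpha

items = [
    RenderItem(2, 0, False, "rock"),
    RenderItem(1, 1, True,  "glass"),
    RenderItem(1, 0, False, "house"),
]
opaque, alpha = build_queues(items)
print([i.mesh for i in opaque])  # ['house', 'rock']
```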

Fifth: Shadows.
What shadow algorithm do you use at the moment? CSMs, I guess?



Congrats on the great engine; I'll keep checking the blog every day for updates :P!

cameni

  • Brano Kemen
  • Outerra Administrator
  • Hero Member
  • *****
  • Posts: 6651
  • Pegs is clever, but tae hain’t a touch sentimental
    • outerra.com
Several questions about the Engine
« Reply #1 on: July 05, 2010, 01:20:11 pm »

Hi,

First: The dataset
There are plenty of datasets available; for a list of global ones, look here: http://www.vterrain.org/Elevation/global.html
We are using remapped data, adapted for our architecture.

Second: The terrain
This is slightly more complex. Coarse LOD terrain tiles are created purely from elevation data; finer levels use the elevation data as a basis, refining it further with fractal algorithms. Either way, the textures are generated after these HR tile data are loaded/refined/generated, and after that the mesh is r2vbo-ed, so it's not actually read back to the CPU. Only certain tile levels are read back to the CPU, for collision purposes.
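That load-then-refine idea can be sketched as a toy (purely illustrative, not Outerra's algorithm): upsample a coarse elevation tile, then add random detail whose amplitude shrinks each level:

```python
import numpy as np

rng = np.random.default_rng(0)

def refine(tile: np.ndarray, roughness: float) -> np.ndarray:
    """Double a tile's resolution: upsample the coarse heights, then add
    band-limited random detail -- a toy stand-in for fractal refinement."""
    fine = np.repeat(np.repeat(tile, 2, axis=0), 2, axis=1)  # nearest upsample
    fine = fine + rng.normal(0.0, roughness, fine.shape)     # add fractal detail
    return fine

coarse = np.zeros((4, 4))            # a coarse elevation tile (from the dataset)
lod1 = refine(coarse, roughness=1.0)
lod2 = refine(lod1, roughness=0.5)   # detail amplitude halves each level
print(lod2.shape)  # (16, 16)
```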

Third: The atmosphere
We started off from O'Neil's paper too, but I rewrote the code several times to better understand what's going on in there and where the bugs come from, changing the scaling function and adding some tweaks to suppress artifacts we were getting.
But I'm still not satisfied, and I will have another take on it shortly.

Fourth: Internal engine workings
No scene graphs; that would limit us. We use a quadtree for the terrain, and everything else is just hooked into it at specific levels. We don't treat objects and terrain uniformly either, but use specialized code for each, organizing it in batches during quadtree traversal so it can be dumped to the GPU fast.

Fifth: Shadows
At the moment we are using a variation of this technique.

Hope that helps :)

Edding3000

Several questions about the Engine
« Reply #2 on: July 19, 2010, 11:23:49 am »

Another question about the fractals. Do you tile fractals over the dataset? I guess you can't do it without tiling, because otherwise precision will run out on the GPU. Or do you use CPU fractal generation?


'No scene graphs; that would limit us. We use a quadtree for the terrain, and everything else is just hooked into it at specific levels. We don't treat objects and terrain uniformly either, but use specialized code for each, organizing it in batches during quadtree traversal so it can be dumped to the GPU fast.'

A question about that too: there must be some kind of hierarchy tree to ensure that objects that depend on another object are 'moved' (rotated, scaled, etc.) together?

cameni

Several questions about the Engine
« Reply #3 on: July 20, 2010, 02:49:13 am »

Quote from: Edding3000
Another question about the fractals. Do you tile fractals over the dataset? I guess you can't do it without tiling, because otherwise precision will run out on the GPU. Or do you use CPU fractal generation?
Nope, the fractals aren't tiled; every place generates a unique pattern. Why would the precision run out?
Fractals run solely on the GPU.

Quote
There must be some kind of hierarchy tree to ensure that objects that depend on another object are 'moved' (rotated, scaled, etc.) together?
Such a hierarchy tree is used only within objects; objects themselves are referenced in world-space (or local-space) coordinates.

Edding3000

Several questions about the Engine
« Reply #4 on: July 21, 2010, 06:31:47 am »

Quote from: cameni
Quote from: Edding3000
Another question about the fractals. Do you tile fractals over the dataset? I guess you can't do it without tiling, because otherwise precision will run out on the GPU. Or do you use CPU fractal generation?
Nope, the fractals aren't tiled; every place generates a unique pattern. Why would the precision run out?
Fractals run solely on the GPU.
It will run out due to 32-bit precision. 0.938182345394, for example, will NOT fit in an fp32. Without tiling you must specify a range of some kind:

patch LOD 0 = -1 / +1
patch LOD 1 = -1/0 and 0/1
etc.: LOD 20 = 0.9959944 to 0.9959944

No more precision!

If you have the solution to this one, patent it first before sharing it, because in all the planet renderers I've encountered so far this is an unsolved problem!

Of course it only starts appearing at very deep levels (18+ for normals and 20+ for height maps)...
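The collapse is easy to demonstrate numerically; a quick float32 check following the LOD/patch numbers above:

```python
import numpy as np

# spacing between adjacent representable float32 values at 1.0
eps = float(np.spacing(np.float32(1.0)))       # ~1.19e-07

# a LOD-22 patch covers 2 / 2**22 of the [-1, 1] face range;
# with 32 vertices per patch side, the per-vertex texcoord step is:
patch_span = 2.0 / 2**22
vertex_step = patch_span / 32                   # ~1.49e-08

# the step is smaller than the representable spacing, so adjacent
# vertices collapse onto the same float32 texcoord near the face edge
print(vertex_step < eps)  # True
```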




Quote
Such a hierarchy tree is used only within objects; objects themselves are referenced in world-space (or local-space) coordinates.
Okay, I understand. But this must make it difficult, for example, to let a planet move? All objects would then have to be translated to match the planet's movement every frame?! That's crazy! :P... It must work in a different way. But I'm more interested in the way you render:

Is there a 'common' pipeline which all geometry goes through, or do you use specialized rendering systems per 'object'? And how do the shaders/materials/effects work in your engine? Ubershader, dynamic linking, dynamic generation?

Greetz.

cameni

Several questions about the Engine
« Reply #5 on: July 21, 2010, 07:40:23 am »

Quote from: Edding3000
It will run out due to 32-bit precision. 0.938182345394, for example, will NOT fit in an fp32. Without tiling you must specify a range of some kind:

patch LOD 0 = -1 / +1
patch LOD 1 = -1/0 and 0/1
etc.: LOD 20 = 0.9959944 to 0.9959944

No more precision!

If you have the solution to this one, patent it first before sharing it, because in all the planet renderers I've encountered so far this is an unsolved problem!

Of course it only starts appearing at very deep levels (18+ for normals and 20+ for height maps)...
Now this genuinely interests me. I've noticed that all planet renderers have problems with precision, but apparently I'm in a different dimension, because I have no idea why.

I also don't understand what you are saying. If I have a heightfield containing elevations in a range of, say, ±16 km, it will use 14 bits for whole metres, and with a 24-bit mantissa, 10 bits remain for the fractional part before the resolution starts to deteriorate.
That means that for elevations above 8 km or below -8 km the maximum resolution is 1/1024 m, <1 mm.
That's at heights where terrain precision doesn't matter that much; at sea level the resolution is many times better.
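That arithmetic is easy to verify; the float32 step size at the elevation extreme, checked with numpy:

```python
import numpy as np

# distance to the next representable float32 value at a 16 km elevation,
# i.e. the best height resolution available at the range extreme
res = float(np.spacing(np.float32(16000.0)))
print(res)  # 0.0009765625 m, i.e. 1/1024 m -- just under 1 mm
```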

I guess there is some really significant difference in how all the other planet renderers work. But I've never studied papers about planet rendering, so I can't tell; I just did it my way and simply never encountered such a problem.
But I'll be glad if you could explain it to me, I'm really wondering what's going on.

Quote
Okay, I understand. But this must make it difficult, for example, to let a planet move? All objects would then have to be translated to match the planet's movement every frame?! That's crazy! :P... It must work in a different way.
It uses the ECEF coordinate frame, so it's the image of the sun and stars that moves. These are rendered specially anyway.

Quote
Is there a 'common' pipeline which all geometry goes through, or do you use specialized rendering systems per 'object'? And how do the shaders/materials/effects work in your engine? Ubershader, dynamic linking, dynamic generation?
Not a completely common pipeline: objects use their common one (with an ubershader, basically).
Terrain uses its own special pipeline, relatively tightly integrated with the fractal generator framework and related stuff. The terrain material system will be updated to handle climate and land-class data, but development won't go in the direction of one common pipeline: the terrain subsystem is so huge and specialized that it deserves its own optimized system. I'd say it wouldn't work very effectively if it used a generic pipeline, and we'd probably be elsewhere now...

Edding3000

Several questions about the Engine
« Reply #6 on: July 21, 2010, 01:21:52 pm »

Quote from: cameni
Now this genuinely interests me. I've noticed that all planet renderers have problems with precision, but apparently I'm in a different dimension, because I have no idea why.
[...]
But I'll be glad if you could explain it to me, I'm really wondering what's going on.
I'll try to explain it! With a completely procedurally generated planet of Earth's size, you have a cube (of course) mapped to a sphere. Every side of this cube is ±40,000 / 4 = 10,000 x 10,000 km; the (0,0) of local space lies in the center of each face to maximize precision.
LOD 0 will be 10,000 x 10,000 km, with 'coordinates' ranging from -1 to 1; say each LOD level has 32 vertices.
Precision will be enough at this LOD: you can divide the texcoords by 32 without running out of it!
Now imagine a very deep LOD, let's say 22! For a patch of LOD 22 in the bottom-right corner of a cube face, the texcoords would be something like (made up to demonstrate) 0.9877679 to 1. And here comes the problem: you can no longer divide the 32 vertices between 0.9877679 and 1, because you would need MORE digits after the decimal point = more bits = failure :(.
A solution I've come up with is the following: I will instead subdivide with the same fractal function down to 70 m resolution, and then 'fade in' another level of fractals generated on top of the 'base fractals'. These high-resolution fractals will use another coordinate system: for example, the high-res fractals come in at level 18.
Now every patch of level 18 that's created gets two coordinate systems: one for base fractal generation and one for fine detail generation. The fine coordinate system will use THAT patch as its local system (from -1 to 1). Because fractals can tile, this won't be a problem!

The only downside here is that generating tiles will take longer, because not only the base fractals but also a high-resolution fractal on top must be generated.
I haven't implemented it yet, for I'm now busy with finally writing an engine instead of crap code :P!




Quote
Quote
Okay, I understand. But this must make it difficult, for example, to let a planet move? All objects would then have to be translated to match the planet's movement every frame?! That's crazy! :P... It must work in a different way.
It uses the ECEF coordinate frame, so it's the image of the sun and stars that moves. These are rendered specially anyway.
Ahhh, I see. My renderer will use orbits around a sun, etc. Stars far away will be rendered to a skybox/dome, of course.


Quote
Quote
Is there a 'common' pipeline which all geometry goes through, or do you use specialized rendering systems per 'object'? And how do the shaders/materials/effects work in your engine? Ubershader, dynamic linking, dynamic generation?
Not a completely common pipeline: objects use their common one (with an ubershader, basically).
Terrain uses its own special pipeline, relatively tightly integrated with the fractal generator framework and related stuff. The terrain material system will be updated to handle climate and land-class data, but development won't go in the direction of one common pipeline: the terrain subsystem is so huge and specialized that it deserves its own optimized system. I'd say it wouldn't work very effectively if it used a generic pipeline, and we'd probably be elsewhere now...
I understand. That's loosely how I'm integrating shaders myself too: base effects for common stuff and specialized custom shaders for terrain, atmosphere, skybox, etc.




I hope you understand the part about the precision issues. If something is unclear, I will try to explain it in more detail!

cameni

Several questions about the Engine
« Reply #7 on: July 21, 2010, 01:44:32 pm »

Quote from: Edding3000
I'll try to explain it! With a completely procedurally generated planet of Earth's size, you have a cube (of course) mapped to a sphere. Every side of this cube is ±40,000 / 4 = 10,000 x 10,000 km; the (0,0) of local space lies in the center of each face to maximize precision.
LOD 0 will be 10,000 x 10,000 km, with 'coordinates' ranging from -1 to 1; say each LOD level has 32 vertices.
Precision will be enough at this LOD: you can divide the texcoords by 32 without running out of it!
Now imagine a very deep LOD, let's say 22! For a patch of LOD 22 in the bottom-right corner of a cube face, the texcoords would be something like (made up to demonstrate) 0.9877679 to 1. And here comes the problem: you can no longer divide the 32 vertices between 0.9877679 and 1, because you would need MORE digits after the decimal point = more bits = failure :(.
And are you feeding these coordinates into a 3D Perlin noise function? Does this gamedev thread describe the problem?

Edding3000

Several questions about the Engine
« Reply #8 on: July 21, 2010, 04:01:28 pm »

This indeed is the problem: fp32 precision runs out as you move deeper.
So my solution is: at a certain level, give tiles a 2D coordinate system for the high-frequency noise.
Because several places use the new coordinate system, you have to tile or mirror the noise.
Example:
tile1 coordinates -1 to 1, tile2 -1 to 1
or
tile1 -1 to 1, tile2 1 to -1.

Because there is now fresh precision, this must work (and as another plus, you can use the base terrain to add certain kinds of high-frequency noise: for example, if it's rocky, more bumpiness than if it's grass).

There are two downsides to this:
It makes things complicated: instead of generating base noise, you must now extract the base noise from the deepest tile level that gives satisfactory results and add the high frequency on top.
The second is that you have to 'fade in' the noise to prevent cracks!

Still, I don't understand why Outerra has no precision issues.
Is it because you already 'tile' the noise? Let's say every 40 miles are -1 to 1 coordinates, or something?

I understand that it can't be compared directly, because Outerra uses a dataset and I'm using fractals to create everything. Still, I don't get it!

Hope you can explain it to me!

cameni

Several questions about the Engine
« Reply #9 on: July 21, 2010, 05:06:47 pm »

Well, Outerra doesn't have the precision issues because it doesn't use Perlin. Simple as that :)

The Perlin function generates a value directly from a coordinate vector, without needing or using the local topology. But that's practically unusable when you want to seed the world from a dataset and want the fractal to respect and reflect the underlying terrain. Not to mention that Perlin doesn't generate realistic terrain; it's always distinctively... Perlin-ish.

We are using a kind of wavelet noise, where the algorithm evaluates terrain parameters (slope, elevation, curvature) at each point to determine the amplitude of the random value to accumulate. The random value is computed from integer face coordinates, so it can use 32-bit face coordinates: plenty of resolution.
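The integer-coordinate trick can be illustrated with a small hash (a made-up mixer, not Outerra's actual function): the pseudo-random value depends only on exact 32-bit integer face coordinates, so it never loses precision no matter how deep the LOD goes:

```python
MASK = 0xFFFFFFFF  # wrap arithmetic to 32 bits

def face_noise(ix: int, iy: int) -> float:
    """Deterministic pseudo-random value in [-1, 1) from 32-bit integer
    face coordinates (illustrative hash, not Outerra's actual function)."""
    h = ((ix * 0x85EBCA6B) ^ (iy * 0xC2B2AE35)) & MASK
    h ^= h >> 15
    h = (h * 0x27D4EB2F) & MASK
    h ^= h >> 13
    return h / 2 ** 31 - 1.0  # map [0, 2^32) onto [-1, 1)

# integer coordinates stay exact even near the face edge, where float32
# texcoords would collapse; the value is repeatable and in range
a = face_noise(2 ** 31 - 2, 7)
print(-1.0 <= a < 1.0, face_noise(2 ** 31 - 2, 7) == a)  # True True
```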

With Perlin, tiling won't work very well, as it would generate the same features over and over, and that pattern would surely be visible even when overlaid on a heightfield.

When we want to generate planets using pure fractals, we'll mimic the process we're already using: one algorithm will produce the continents, mountain ridges in various states of erosion, volcanoes etc. at coarse resolution, and the existing fractal refinement algorithm will take it down to ground level. This way the planets will look realistic while still being completely random.

Edding3000

Several questions about the Engine
« Reply #10 on: July 22, 2010, 11:46:36 am »

Quote from: cameni
Well, Outerra doesn't have the precision issues because it doesn't use Perlin. Simple as that :)
[...]
We are using a kind of wavelet noise, where the algorithm evaluates terrain parameters (slope, elevation, curvature) at each point to determine the amplitude of the random value to accumulate.
[...]
I've looked into wavelet noise and I'm going to give it a try too (although it looks a lot like Perlin?). I suppose you use combinations like fBm, turbulence, ridged, etc.? But this must give the same Perlin-ish noise, I guess?

Furthermore, one last question: you're talking about slope and curvature.
I'm very interested in what the difference is.
Is slope local, and curvature over a larger area?

Greetz.

cameni

Several questions about the Engine
« Reply #11 on: July 23, 2010, 03:02:08 am »

Quote from: Edding3000
I've looked into wavelet noise and I'm going to give it a try too (although it looks a lot like Perlin?). I suppose you use combinations like fBm, turbulence, ridged, etc.? But this must give the same Perlin-ish noise, I guess?
Looking at what you could have found about wavelet noise: Cook & DeRose?
Yes, that will give similar results; it's meant to remove some of the artifacts of Perlin noise.

But a more important point is that we use integer coordinates to address points on a face; 32 bits give ~2 mm resolution.
Looking briefly at Perlin, I'd think you could use that too.
But otherwise, we accumulate the noise differently and also seed it from the dataset, so we could go even deeper without tiling. It's hard to explain now, but I'd like to write a document about the algorithm once the major stuff is finished and there's hopefully more time.

Quote
Furthermore, one last question: you're talking about slope and curvature.
I'm very interested in what the difference is.
Is slope local, and curvature over a larger area?
Nope, slope is... slope: the first derivative, ranging from flat to steep. The curvature corresponds to the second derivative, with positive values for hills, negative values for valleys, and zero for flat terrain.
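In finite-difference terms, on a toy 1-D height profile (using the sign convention above, where curvature is positive on hills):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)
dx = x[1] - x[0]
h = np.exp(-8.0 * x**2)         # height profile: a single hill, crest at x=0

slope = np.gradient(h, dx)       # first derivative: zero at the crest, steep on flanks
curv = -np.gradient(slope, dx)   # second derivative, sign flipped so hills are positive

print(abs(slope[100]) < 1e-9, curv[100] > 0)  # True True (flat crest, positive curvature)
```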

Edding3000

Several questions about the Engine
« Reply #12 on: August 03, 2010, 03:37:42 pm »

Another question!!

What kind of shader system are you using?
I'm developing a system in which basic materials can be made in the material editor by setting parameters, plus a custom shader creator which uses a tree-like system (yeah, just like the C4 engine).

Very curious what kind of system Outerra will use to create a generic interface for meshes!

Greetz!

cameni

Several questions about the Engine
« Reply #13 on: August 04, 2010, 04:19:10 am »

For objects it will basically be an ubershader, somewhat simplified since it doesn't have to cover the terrain.
Later it will be complicated again, to allow a procedural approach for materials.

Edding3000

Several questions about the Engine
« Reply #14 on: August 10, 2010, 07:18:27 pm »

Quote from: cameni
For objects it will basically be an ubershader, somewhat simplified since it doesn't have to cover the terrain.
Later it will be complicated again, to allow a procedural approach for materials.
I see why the terrain doesn't use the ubershader; it's just too complicated.
But it's a great plus to hear that there will be some sort of unified material interface for rendering models.

Have you ever thought about using a deferred renderer? I'm thinking of supporting it myself, but because of the transparency of the atmosphere I'm afraid it's very difficult to implement, since I guess the colors will differ slightly between the two renderers.

From what I've read and tested (a bit), there seem to be a lot of pros to using a deferred renderer.

Furthermore: terrain self-shadowing!
I know it isn't implemented yet, but I read a post about 'a different algorithm'. Will you use a completely different algorithm for terrain self-shadowing and another one (the one that is implemented) for casting shadows on the planet/other objects?
I can't see how that would work, though, because the terrain will cast shadows on objects too?
It does seem logical to use another algorithm, though: objects like houses and other stuff won't need to cast shadows when they're like 100 miles away, but the terrain must, or else it will look strange!

I'm very interested in your ideas about this.