I tested an experimental resampling method on the existing 90m dataset, replacing the standard bilinear resampling.
The problem with bilinear resampling is that it loses high-frequency detail wherever a sample falls between the source points and gets averaged from the 4 surrounding samples. This leads to blurring of the data, as can be easily seen when blending a texture with itself to make it seamless (source):
In OT the blurring causes a loss of detail at a specific wavelength of the data, which in the case of the current dataset is around 75m. Detail above that wavelength is preserved in the data, and finer detail below it is procedurally generated, but right at 75m it's artificially suppressed by the bilinear filter.
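To make the 4-sample averaging concrete, here's a minimal C++ sketch of plain bilinear resampling from a heightfield. The function name and array layout are illustrative only, not the engine's actual code:

```cpp
#include <algorithm>

// Standard bilinear resampling of a w*h heightfield (illustrative sketch):
// the output is a weighted average of the 4 nearest source heights, so a
// sample falling mid-cell averages them equally and loses the finest detail.
float bilinear(const float* src, int w, int h, float x, float y)
{
    int x0 = (int)x, y0 = (int)y;
    int x1 = std::min(x0 + 1, w - 1);
    int y1 = std::min(y0 + 1, h - 1);
    float fx = x - x0, fy = y - y0;        // fractional position inside the cell

    float h00 = src[y0*w + x0], h10 = src[y0*w + x1];
    float h01 = src[y1*w + x0], h11 = src[y1*w + x1];

    float top = h00 + (h10 - h00) * fx;    // interpolate along x, top row
    float bot = h01 + (h11 - h01) * fx;    // interpolate along x, bottom row
    return top + (bot - top) * fy;         // interpolate along y
}
```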
In order to get some detail back, I tried to compensate for the blurring by adding fractal noise proportional to the distance from the source points. In other words, if the sample falls close to one of the source samples, it's taken almost directly; but the farther it lies from the source points, the larger the added noise value, scaled by the difference between the source samples and peaking in the middle of the cell.
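The post doesn't give the exact formula, so the sketch below is just one way to implement that idea on top of the bilinear sample above: the function name, the noise_scale parameter, the distance weighting and the placeholder noise are my own assumptions.

```cpp
#include <algorithm>
#include <cmath>

// Placeholder noise in [-1,1]; a real implementation would use proper
// multi-octave fractal noise (this stand-in exists only to make the sketch compile).
inline float fractal_noise(float x, float y)
{
    float s = std::sin(x * 12.9898f + y * 78.233f) * 43758.5453f;
    return 2.0f * (s - std::floor(s)) - 1.0f;
}

// Bilinear resampling with noise compensation: the farther the sample lies
// from the surrounding source points, the more noise is added, scaled by
// how much those 4 source heights differ locally.
float resample_compensated(const float* src, int w, int h,
                           float x, float y, float noise_scale)
{
    int x0 = (int)x, y0 = (int)y;
    int x1 = std::min(x0 + 1, w - 1);
    int y1 = std::min(y0 + 1, h - 1);
    float fx = x - x0, fy = y - y0;

    float h00 = src[y0*w + x0], h10 = src[y0*w + x1];
    float h01 = src[y1*w + x0], h11 = src[y1*w + x1];

    // plain bilinear value, as in the previous sketch
    float top = h00 + (h10 - h00) * fx;
    float bot = h01 + (h11 - h01) * fx;
    float bilin = top + (bot - top) * fy;

    // weight from the distance to the nearest source point:
    // 0 at the source samples, 1 in the middle of the cell
    float dx = std::min(fx, 1.0f - fx);
    float dy = std::min(fy, 1.0f - fy);
    float wgt = std::sqrt(dx*dx + dy*dy) * 1.4142136f;

    // local contrast: how much the 4 surrounding source heights differ
    float hmin = std::min({h00, h10, h01, h11});
    float hmax = std::max({h00, h10, h01, h11});

    // add noise proportional to both the distance weight and the local range
    return bilin + wgt * (hmax - hmin) * noise_scale * fractal_noise(x, y);
}
```

With a small noise_scale the added detail stays bounded by the local height variation, so flat source cells remain flat and only contrasting cells get roughened.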
Here are some screens for comparison:
It does some interesting things, and it also affects coarser detail levels.
The dataset is a bit larger (around +1.5GB) since the terrain is effectively noisier and therefore harder to compress.