Terrain Former - Mould Tool Conundrum

Introduction

For my Unity plugin, Terrain Former, I am trying to implement a mould tool that moulds the terrain around scene geometry, such as roads and buildings.

The test scene.

The Problem

One way to accomplish this would be to use Physics.Raycast to cast a ray from the bottom of the terrain up to the top for every terrain segment. If the ray hits nothing, the terrain segment remains at its original height. This approach works, but it can take up to 300 ms just to perform this step, which is unacceptable because I want this to update in realtime.

Creating a depth map (or a heightfield to some extent) by raycasting from the bottom of the terrain up to the scene geometry.
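
For reference, here is roughly what that brute-force approach looks like. This is only a sketch; the method name and sampling details are mine, not Terrain Former's actual code:

using UnityEngine;

// Naive approach: cast one ray upwards per heightmap sample and record the
// normalized hit height. Samples that hit nothing keep their original height.
float[,] BuildDepthMap(Terrain terrain) {
    TerrainData data = terrain.terrainData;
    int resolution = data.heightmapResolution;
    float[,] heights = data.GetHeights(0, 0, resolution, resolution);
    Vector3 origin = terrain.transform.position;
    for (int y = 0; y < resolution; y++) {
        for (int x = 0; x < resolution; x++) {
            Vector3 start = origin + new Vector3(
                x / (float)(resolution - 1) * data.size.x,
                0f,
                y / (float)(resolution - 1) * data.size.z);
            RaycastHit hit;
            if (Physics.Raycast(start, Vector3.up, out hit, data.size.y)) {
                heights[y, x] = hit.distance / data.size.y; // Normalize to 0..1.
            }
        }
    }
    return heights;
}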

Another important aspect of the mould tool is that the user should be able to define how far the moulding extends from the edges of the geometry. I can't see how to accomplish this in a performant manner with C# code alone.

The scene geometry extended outwards.
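
The only way I can think to do this in plain C# is a naive morphological dilation, where every sample takes the maximum of its neighbourhood within the user-defined radius. A sketch, purely to illustrate why it's too slow (the nested loops make it O(resolution² × radius²)):

using UnityEngine;

// Naive dilation: each sample becomes the maximum of its neighbourhood.
// The inner radius loops are what make this far too slow for realtime.
float[,] Dilate(float[,] source, int radius) {
    int resolution = source.GetLength(0);
    float[,] result = new float[resolution, resolution];
    for (int y = 0; y < resolution; y++) {
        for (int x = 0; x < resolution; x++) {
            float max = source[y, x];
            for (int dy = -radius; dy <= radius; dy++) {
                for (int dx = -radius; dx <= radius; dx++) {
                    int ny = Mathf.Clamp(y + dy, 0, resolution - 1);
                    int nx = Mathf.Clamp(x + dx, 0, resolution - 1);
                    if (source[ny, nx] > max) max = source[ny, nx];
                }
            }
            result[y, x] = max;
        }
    }
    return result;
}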

The next step is to blur the result, which can make the entire process take over a second. Again, I want this to update in realtime!

The scene geometry blurred to reduce the severity of the cliffs.
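
The blur itself is straightforward; the cost is the problem. As a sketch, a separable box blur (two one-dimensional passes instead of one two-dimensional pass) already helps, but in C# it's still nowhere near realtime at typical heightmap resolutions:

using UnityEngine;

// Separable box blur: a horizontal pass into a temporary buffer,
// followed by a vertical pass over that buffer.
float[,] BoxBlur(float[,] source, int radius) {
    int resolution = source.GetLength(0);
    float[,] temp = new float[resolution, resolution];
    float[,] result = new float[resolution, resolution];
    float normalizer = 1f / (radius * 2 + 1);
    for (int y = 0; y < resolution; y++) {
        for (int x = 0; x < resolution; x++) {
            float sum = 0f;
            for (int d = -radius; d <= radius; d++) {
                sum += source[y, Mathf.Clamp(x + d, 0, resolution - 1)];
            }
            temp[y, x] = sum * normalizer;
        }
    }
    for (int y = 0; y < resolution; y++) {
        for (int x = 0; x < resolution; x++) {
            float sum = 0f;
            for (int d = -radius; d <= radius; d++) {
                sum += temp[Mathf.Clamp(y + d, 0, resolution - 1), x];
            }
            result[y, x] = sum * normalizer;
        }
    }
    return result;
}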

This feature must work in the editor on both Windows and Mac, so as far as I can tell Compute Shaders are out of the question (for now at least), since the Mac editor runs on OpenGL and OS X's OpenGL support tops out at 4.1, short of the 4.3 that compute shaders require.

To summarise, the feature requires the following steps:

1. Create a depth map of the scene geometry, as seen from the bottom of the terrain.
2. Extend the geometry outwards by a user-defined distance.
3. Blur the result to reduce the severity of the cliffs.
4. Apply the result to the terrain's heightmap.

The intended final result.

The Attempted Solution

I have some experience using command buffers, so I went ahead and started a quasi-GPU-based implementation of this.

Something very nice about using shaders for this is that I can just use the depth texture rendered by the GPU rather than raycasting upwards from the bottom. Simply positioning the camera at the bottom of the terrain facing upwards, setting the near clip plane to 0, and setting the far clip plane to the height of the terrain makes the depth texture perfectly normalized.
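
A minimal sketch of that camera setup, assuming an orthographic camera and a terrain reference (the helper name and framing details are mine, not the plugin's actual code):

using UnityEngine;

// Orthographic camera at the base of the terrain, looking straight up.
// With near = 0 and far = the terrain's height, the depth texture comes
// out already normalized to the terrain's height range.
Camera CreateDepthCamera(Terrain terrain) {
    TerrainData data = terrain.terrainData;
    Camera cam = new GameObject("MouldDepthCamera").AddComponent<Camera>();
    cam.orthographic = true;
    cam.transform.position = terrain.transform.position
        + new Vector3(data.size.x * 0.5f, 0f, data.size.z * 0.5f);
    cam.transform.rotation = Quaternion.LookRotation(Vector3.up, Vector3.forward);
    cam.nearClipPlane = 0f;
    cam.farClipPlane = data.size.y;
    // Size the frustum to exactly cover the terrain's footprint.
    cam.orthographicSize = data.size.z * 0.5f;
    cam.aspect = data.size.x / data.size.z;
    cam.depthTextureMode = DepthTextureMode.Depth;
    return cam;
}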

The result is exactly what I want, and it requires no actual code on my side. Okay, the next task is to extend the geometry outwards. A replacement shader could be used to push the mesh vertices outwards.
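
Roughly what I had in mind; the shader name and the _ExtendDistance property are hypothetical, and the shader's vertex stage would push each vertex outwards along its normal by the user-defined distance:

using UnityEngine;

// Render the scene through the depth camera with a replacement shader
// that displaces vertices along their normals before depth is written.
void RenderExtendedGeometry(Camera depthCamera, float extendDistance) {
    Shader extendShader = Shader.Find("Hidden/TerrainFormer/ExtendGeometry");
    Shader.SetGlobalFloat("_ExtendDistance", extendDistance);
    depthCamera.RenderWithShader(extendShader, "RenderType");
}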

But when I use a replacement shader, the command buffer is never called. I confirmed this by removing the replacement shader, after which the command buffer began executing like it should.

Okay, so I can't use a replacement shader, but I can try drawing every mesh individually using CommandBuffer.DrawMesh or CommandBuffer.DrawRenderer and telling it to use my own material/shader. This works; however, I no longer get a free depth texture (unless I'm mistaken), so I need to calculate the depth myself by returning the depth value as a colour (in the red channel) from the shader I'm using.
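
Roughly what this looks like; every name here is a placeholder, and depthMaterial would use the depth-as-colour shader just described:

using UnityEngine;
using UnityEngine.Rendering;

// Draw each renderer into a render texture with our own material via a
// command buffer, since the replacement shader path doesn't work.
CommandBuffer BuildMouldBuffer(Camera depthCamera, RenderTexture target,
        Renderer[] renderers, Material depthMaterial) {
    CommandBuffer buffer = new CommandBuffer { name = "Mould Tool Depth" };
    buffer.SetRenderTarget(target);
    buffer.ClearRenderTarget(true, true, Color.black);
    foreach (Renderer renderer in renderers) {
        buffer.DrawRenderer(renderer, depthMaterial);
    }
    depthCamera.AddCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);
    return buffer;
}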

But that doesn't seem to work. When I issue CommandBuffer.DrawMesh or CommandBuffer.DrawRenderer with a material that should output a specific colour, it doesn't behave as expected. Instead it returns a shade almost 40% darker, as if lighting were playing a part, even though the material I'm drawing the mesh with uses a shader that returns 100% white (for debugging):

fixed4 frag(v2f i) : SV_Target {
    // Should always output opaque white, yet the result appears darker.
    return fixed4(1, 1, 1, 1);
}

Final Thoughts

I ran into more roadblocks along the way, but to keep this post short I've left them out. It's pretty easy to see that every way I try to implement this feature, something stops it from working. I'm guessing there's something simple I'm missing, but I need some guidance, and asking on the forums yields zero replies (it is not a trivial subject, after all).

Last Modified: July 29, 2016.