DirectX Raytracing: Further Unification (darkcephas.blogspot.com)
48 points by petermcneeley on March 21, 2018 | 21 comments



Which is only supported on AMD cards.

Given that NVidia has their own API (RTX), which will be part of GameWorks [0], is collaborating with Microsoft, and is currently requesting papers for Ray Tracing Gems [1], I don't see them bothering with Vulkan support.

[0] - https://blogs.nvidia.com/blog/2018/03/19/whats-difference-be...

[1] - https://news.developer.nvidia.com/call-for-papers-real-time-...


Hmm... The video claims that it is open source and supports non-AMD hardware.

https://www.youtube.com/watch?v=C9eMciZGN4I&t=1m7s


> Access to real-time ray-tracing will be made available to AMD partners by contacting your AMD representative.

How does that combine with:

> Like many of the other open source tools available from GPUOpen, real-time ray tracing will give users more control over the development process unlocking even more performance from your GPU.

Or do they mean access before the official release?


The rasterized image they have on that announcement looks like it was rendered with OpenGL 1.2 or something. In reality, a rasterized image produced today would look almost identical to their raytraced image.



The black box criticism is fair but to be expected. Early 3D APIs were black boxes, but that had the advantage of allowing IHVs to experiment with different implementations behind the scenes.

Now that things have largely settled down, moving to explicit APIs like Vulkan makes sense.

With hardware acceleration for ray tracing we're only just at the beginning of the development, so having fairly black box APIs makes sense again. Expect that to change eventually once the industry has settled on answers to many of the currently open questions, but it might take a while.


As mentioned in the linked article, LODing is a big unknown. However, there are three aspects of the API that can be regarded as aiding LOD tracing. First, maximum trace distances can be specified per ray. Second, miss shaders (run for rays exceeding those maximums) can generate more rays. And finally, intersection shaders can be provided for non-polygonal LOD techniques (voxels).
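As an illustration of the last point, here is a minimal host-side sketch (C++ against the D3D12/DXR headers; the helper name, buffer address and count are placeholders, not anything from the article) of declaring procedural AABB geometry in a bottom-level acceleration structure, which is what lets a custom intersection shader (e.g. a voxel or SDF marcher) run instead of the built-in triangle test:

    #include <d3d12.h>

    // Hypothetical helper: describe a buffer of AABBs so that hits inside them
    // are reported by the hit group's intersection shader rather than the
    // fixed-function triangle test.
    D3D12_RAYTRACING_GEOMETRY_DESC MakeVoxelGeometryDesc(
        D3D12_GPU_VIRTUAL_ADDRESS aabbBuffer, UINT64 aabbCount)
    {
        D3D12_RAYTRACING_GEOMETRY_DESC desc = {};
        desc.Type  = D3D12_RAYTRACING_GEOMETRY_TYPE_PROCEDURAL_PRIMITIVE_AABBS;
        desc.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
        desc.AABBs.AABBCount           = aabbCount;
        desc.AABBs.AABBs.StartAddress  = aabbBuffer;
        desc.AABBs.AABBs.StrideInBytes = sizeof(D3D12_RAYTRACING_AABB);
        return desc;
    }

The first two points are shader-side features (the ray description's maximum distance and the miss shader stage), so there is nothing to show on the host for them.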


Can this raytracing pipeline be mixed with the rasterization pipeline?


Yes. In fact, the starting point for the raytracing is the data resulting from rasterizing the normal camera view. This technique allows the expensive raytracing to be used sparingly.
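As a rough illustration (plain C++, not shader or DXR API code; all names here are made up), seeding a reflection ray from a G-buffer sample might look like this, with the actual trace only issued for the pixels that need it:

    struct Vec3 { float x, y, z; };

    static Vec3  sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3  scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
    static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // What the rasterizer already produced for this pixel.
    struct GBufferSample { Vec3 worldPos; Vec3 worldNormal; };

    struct Ray { Vec3 origin; Vec3 direction; float tMin; float tMax; };

    // Build a reflection ray from the rasterized data; the expensive trace then
    // only happens for pixels that actually need it (e.g. glossy surfaces).
    Ray MakeReflectionRay(const GBufferSample& g, Vec3 cameraPos, float maxDistance)
    {
        Vec3 view = sub(g.worldPos, cameraPos);   // camera -> surface
        Vec3 refl = sub(view, scale(g.worldNormal, 2.0f * dot(view, g.worldNormal)));
        return { g.worldPos, refl, 1e-3f, maxDistance };  // small tMin to avoid self-hits
    }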


What's amusing to me is how closely this resembles the path that some of us went down in film rendering.

If that history is any guide, the hybrid approach here is likely to be a short-lived stop-gap before moving on to full ray tracing for everything. It quickly becomes annoying to have to maintain the code paths and the data structures for both rasterization and ray tracing. You have to keep the two carefully in sync (e.g., tessellating to the same level, using the same LoD levels, keeping displacements the same) or you'll get weird artifacts. Even then, numeric precision can still bite you; surface acne and ray tracing bias issues are worse than usual when the launch points for your rays are computed by an entirely different rendering technique.

Things get a lot easier when you have just the one codepath and set of data structures. Not to mention that ray tracing for primary visibility tends to be fairly cheap since that's when the rays are the most coherent. A lot of the ray tracing research back in the aughts dealt with efficient acceleration structure traversal and intersection for tightly coherent bundles of rays. People have been ray casting for a long time. I think the bigger deal is that it's only in the past decade where we finally have enough compute power to really start to handle the less coherent secondary rays at real-time rates.
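For reference, the usual band-aid for the acne problem mentioned above is to nudge the launch point off the surface before tracing. A minimal sketch of that idea (plain C++, heavily simplified; production renderers use more careful schemes, and this is not the specific method referred to above):

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Push the ray origin slightly off the surface along the geometric normal,
    // scaling the offset with the magnitude of the position so it stays
    // meaningful for geometry far from the world origin.
    Vec3 OffsetRayOrigin(Vec3 p, Vec3 geometricNormal)
    {
        const float baseEps  = 1e-4f;
        const float scaleEps = 1e-4f;
        float m      = std::max({ std::fabs(p.x), std::fabs(p.y), std::fabs(p.z) });
        float offset = baseEps + scaleEps * m;
        return { p.x + geometricNormal.x * offset,
                 p.y + geometricNormal.y * offset,
                 p.z + geometricNormal.z * offset };
    }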


The trend is definitely set. However, even AAA gaming is more accepting of minor artifacts than of any dips in framerate below 30fps. We might see biased non-rasterized realtime rendering before we see the kind of unbiased rendering used offline happening in games.


"Short" being a relative term though. Your points make sense but there's simply no way raytracing can be as performant as current rasterization techniques in geometry-heavy scenes at a same-or-better quality level, resolution, and framerate until GPUs get substantially more powerful.


This is entirely the wrong way around: raytracing scales less than linearly with scene complexity, while rasterization is mostly linear in the number of visible primitives. It is not a simple complexity function in either case, though.


> raytracing scales less than linearly with the scene complexity

Not really true, is it? More like O(log n), or linear in the number of pixels/samples.


I meant that in the sense that O(log n) < O(n). But when comparing raytracing to rasterization you have the problem that the complexity depends mostly on different variables for each algorithm.


On the other hand, if we assume that every pixel is covered by a constant number of triangles and a typical scene doesn't push more triangles than there are pixels, rasterization suddenly looks a lot more like O(pixels) or O(1) per pixel while ray tracing is still O(log triangles) per pixel accounting only for direct rays.

Then again, it's the constant factors that really matter: a couple of arithmetic operations plus a single spatially coherent memory fetch (the depth test) and a single write to the frame- or g-buffer, versus 10+ fetches for the kd-tree/BVH traversal followed by several full ray vs. triangle tests (each one taking at least an order of magnitude more arithmetic ops than the sign-based test in rasterization).
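To make the traversal side of that comparison concrete, here is a minimal sketch (plain C++; the data layout is illustrative, not any particular engine's) of the per-ray loop: every step is a node fetch plus a slab test, and for a reasonably balanced tree the number of visited nodes grows roughly with the log of the primitive count.

    #include <algorithm>
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct Ray  { float origin[3]; float invDir[3]; float tMax; };
    struct AABB { float lo[3]; float hi[3]; };

    struct BvhNode {
        AABB    bounds;
        int32_t leftChild;   // index of first child; right child is leftChild + 1; < 0 means leaf
        int32_t firstPrim;   // first primitive index, valid for leaves
        int32_t primCount;   // number of primitives in the leaf
    };

    static bool HitAABB(const Ray& r, const AABB& b)
    {
        float tNear = 0.0f, tFar = r.tMax;
        for (int a = 0; a < 3; ++a) {                 // slab test: a handful of mul/adds + compares
            float t0 = (b.lo[a] - r.origin[a]) * r.invDir[a];
            float t1 = (b.hi[a] - r.origin[a]) * r.invDir[a];
            if (t0 > t1) std::swap(t0, t1);
            tNear = std::max(tNear, t0);
            tFar  = std::min(tFar, t1);
        }
        return tNear <= tFar;
    }

    // Returns how many nodes were fetched; every fetch is a (hopefully cached)
    // memory access, which is where the constant factor versus a depth test hides.
    int CountVisitedNodes(const std::vector<BvhNode>& nodes, const Ray& ray)
    {
        int visited = 0;
        std::vector<int32_t> stack = { 0 };           // start at the root
        while (!stack.empty()) {
            int32_t idx = stack.back(); stack.pop_back();
            const BvhNode& node = nodes[idx];
            ++visited;
            if (!HitAABB(ray, node.bounds)) continue;
            if (node.leftChild < 0) {
                // Leaf: full ray/triangle tests against node.primCount primitives go here.
                continue;
            }
            stack.push_back(node.leftChild);
            stack.push_back(node.leftChild + 1);
        }
        return visited;
    }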

I guess, given that we are not yet able to use Voxel Cone Tracing in AAA games, will temporal filtering or RNN-based denoising really be enough to make ray traced GI worth the cost?


[This became quite a rambling post, but I'm too lazy to shorten it. Sorry.]

For a proper response to this I'd need to dig up the literature that analyzed the complexity in detail. I haven't had a reason to look at that yet. In practice, we do not care that much for theoretical complexity of algorithms. It does not tell you anything really useful. For certain problems, a grid beats a BVH, and for others, a BVH beats a grid. Sometimes, switching the heuristic used for BVH construction makes or breaks performance. Sometimes, rasterization performs worse than ray tracing.

Voxel cone tracing is at its core a volume rendering technique. It requires a brute force sampling of the generated reflectance volume at each grid cell along the ray. The dynamic generation of the volumetric data is a three-dimensional rasterization step that is not cheap. And the output is only really good for surfaces with a certain amount of glossiness. I was a bit surprised that Epic axed it from UE4 before release (implementing it takes a lot of work!), but I think in the end the combined results from reflection mapping and screen space reflections were of similar quality. It's a shame, though, that Cyril Crassin's research work went essentially unused.
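For readers who haven't seen it, the core of voxel cone tracing is roughly the following loop: step along the cone axis, sample the prefiltered volume at a mip level matching the cone's current diameter, and composite front to back until it saturates. A simplified sketch (plain C++; the volume sampler callback is a placeholder for a real 3D texture lookup, and the names are mine, not Crassin's):

    #include <algorithm>
    #include <cmath>
    #include <functional>

    struct Vec3   { float x, y, z; };
    struct Sample { Vec3 radiance; float alpha; };   // prefiltered color + occlusion

    // sampleVolume(position, mipLevel) -> filtered radiance/occlusion at that scale.
    using VolumeSampler = std::function<Sample(Vec3, float)>;

    Vec3 TraceCone(Vec3 origin, Vec3 dir, float halfAngleTan, float maxDist,
                   float voxelSize, const VolumeSampler& sampleVolume)
    {
        Vec3  accum = { 0, 0, 0 };
        float alpha = 0.0f;
        float t     = voxelSize;                      // start one voxel out to avoid self-sampling
        while (t < maxDist && alpha < 0.99f) {
            float diameter = std::max(voxelSize, 2.0f * halfAngleTan * t);
            float mip      = std::log2(diameter / voxelSize);   // coarser mips as the cone widens
            Vec3  p        = { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
            Sample s       = sampleVolume(p, mip);
            // Front-to-back compositing: each step is one brute force volume sample.
            float w = (1.0f - alpha) * s.alpha;
            accum.x += w * s.radiance.x;
            accum.y += w * s.radiance.y;
            accum.z += w * s.radiance.z;
            alpha   += w;
            t       += 0.5f * diameter;               // step size grows with the cone
        }
        return accum;
    }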

This is fundamentally different from path tracing with BVH traversal. A lot of manpower and money has been sunk into the latter problem over the last couple of decades. Ray intersection kernels like those in OptiX use every trick in the book to run fast on the hardware they are designed for, and they are really, really fast when you consider what they have to work with. Unfortunately, the hardware manufacturers are hell-bent on keeping a lot of their tricks secret.

Wavelet-filter-based denoising really brings the required input down to about 1 or 2 paths per pixel. I have had that demonstrated to me in real time on quite complex scenes (one was San Miguel), on currently available commodity hardware, too. Otherwise I wouldn't have believed it. These filters are what make realtime path tracing work.
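For anyone curious what such a filter looks like in the small, here is a heavily simplified sketch of one "à-trous" wavelet pass (plain C++, single channel, edge-stopping on luminance only; real denoisers of the SVGF kind also weight by normals, depth and variance, so treat this purely as the skeleton, not the filter demonstrated above):

    #include <algorithm>
    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct Image {
        int width, height;
        std::vector<float> lum;                       // one channel for brevity
        float at(int x, int y) const {
            x = std::clamp(x, 0, width - 1);
            y = std::clamp(y, 0, height - 1);
            return lum[y * width + x];
        }
    };

    // One filter pass; call repeatedly with stepSize = 1, 2, 4, 8, ...
    Image ATrousPass(const Image& in, int stepSize, float sigmaLum)
    {
        static const float kernel[3] = { 3.f / 8.f, 1.f / 4.f, 1.f / 16.f }; // B3 spline taps
        Image out = in;
        for (int y = 0; y < in.height; ++y)
            for (int x = 0; x < in.width; ++x) {
                float center = in.at(x, y), sum = 0.f, wsum = 0.f;
                for (int dy = -2; dy <= 2; ++dy)
                    for (int dx = -2; dx <= 2; ++dx) {
                        float s = in.at(x + dx * stepSize, y + dy * stepSize);
                        float h = kernel[std::abs(dx)] * kernel[std::abs(dy)];
                        float d = s - center;                         // edge-stopping term
                        float w = h * std::exp(-(d * d) / (sigmaLum * sigmaLum));
                        sum += w * s; wsum += w;
                    }
                out.lum[y * in.width + x] = sum / std::max(wsum, 1e-6f);
            }
        return out;
    }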


Path tracing a really complex static scene in realtime has become quite feasible recently, especially with all these fancy temporal denoising filters that basically turn pure noise into surprisingly high quality images in no time.

The catch is the word "static". Dynamic scenes suffer from the overhead of building acceleration structures. You can hack something together if you have a tiny bit of dynamic geometry in an otherwise static scene or your animation is just moving static objects around. But a raytraced animated human face in realtime is something that we likely won't see for a while still.
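DXR does at least expose a cheaper path for that case: an acceleration structure built with the ALLOW_UPDATE flag can be refitted in place each frame instead of rebuilt from scratch, at the cost of degrading trace quality as the geometry deforms, which is exactly the trade-off for animated meshes like faces. A minimal host-side sketch (C++ against the D3D12/DXR headers; the helper and all buffer addresses are placeholders):

    #include <d3d12.h>

    void RefitBlas(ID3D12GraphicsCommandList4* cmdList,
                   const D3D12_RAYTRACING_GEOMETRY_DESC* updatedGeometry,
                   UINT geometryCount,
                   D3D12_GPU_VIRTUAL_ADDRESS blas,      // existing BLAS, built with ALLOW_UPDATE
                   D3D12_GPU_VIRTUAL_ADDRESS scratch)   // scratch buffer of the required size
    {
        D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
        inputs.Type           = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
        inputs.Flags          = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_ALLOW_UPDATE |
                                D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PERFORM_UPDATE;
        inputs.DescsLayout    = D3D12_ELEMENTS_LAYOUT_ARRAY;
        inputs.NumDescs       = geometryCount;
        inputs.pGeometryDescs = updatedGeometry;        // same topology, new vertex positions

        D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC desc = {};
        desc.Inputs                           = inputs;
        desc.SourceAccelerationStructureData  = blas;   // refit in place: source == dest
        desc.DestAccelerationStructureData    = blas;
        desc.ScratchAccelerationStructureData = scratch;

        cmdList->BuildRaytracingAccelerationStructure(&desc, 0, nullptr);
    }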


From an earlier blog post https://blogs.msdn.microsoft.com/directx/2018/03/19/announci...

DXR will initially be used to supplement current rendering techniques such as screen space reflections, for example, to fill in data from geometry that’s either occluded or off-screen. This will lead to a material increase in visual quality for these effects in the near future. Over the next several years, however, we expect an increase in utilization of DXR for techniques that are simply impractical for rasterization, such as true global illumination.


Yup! AMD did a presentation on doing exactly this at GDC this morning.



