Raytracing a huge bunch of triangles the naive way (i.e. testing each ray against every single triangle) would be humongously slow, which is why raytracers use optimizations to greatly reduce the number of ray-triangle tests that need to be done. The most common optimization is to organize the triangles into a tree-like data structure, such as a kd-tree.
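Just to illustrate what I mean by that (this is a toy sketch I put together, not how any real raytracer or RTX is implemented; all names are made up): triangles get grouped under axis-aligned bounding boxes, and a ray that misses a box skips every triangle underneath it, so only a handful of triangles ever need the full intersection test.

```python
# Toy BVH sketch, purely illustrative. Triangles are grouped under
# axis-aligned bounding boxes; a ray that misses a box skips every
# triangle underneath that box.

def tri_aabb(tri):
    """Axis-aligned bounding box (lo, hi corners) of one triangle."""
    lo = tuple(min(v[i] for v in tri) for i in range(3))
    hi = tuple(max(v[i] for v in tri) for i in range(3))
    return lo, hi

def merge(a, b):
    """Smallest box enclosing boxes a and b."""
    return (tuple(min(a[0][i], b[0][i]) for i in range(3)),
            tuple(max(a[1][i], b[1][i]) for i in range(3)))

class Node:
    def __init__(self, tris):
        boxes = [tri_aabb(t) for t in tris]
        self.box = boxes[0]
        for b in boxes[1:]:
            self.box = merge(self.box, b)
        if len(tris) <= 2:          # small leaf: store triangles directly
            self.tris, self.children = tris, []
        else:                       # split at the median along the longest axis
            axis = max(range(3), key=lambda i: self.box[1][i] - self.box[0][i])
            srt = sorted(tris, key=lambda t: sum(v[axis] for v in t))
            mid = len(srt) // 2
            self.tris = []
            self.children = [Node(srt[:mid]), Node(srt[mid:])]

def ray_hits_box(o, d, box):
    """Slab test: does the ray o + t*d (t >= 0) hit the box?"""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        if abs(d[i]) < 1e-12:
            if not box[0][i] <= o[i] <= box[1][i]:
                return False
        else:
            t1 = (box[0][i] - o[i]) / d[i]
            t2 = (box[1][i] - o[i]) / d[i]
            tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin <= tmax

def candidates(node, o, d):
    """Triangles surviving box pruning; only these need full ray-triangle tests."""
    if not ray_hits_box(o, d, node.box):
        return []
    if node.tris:
        return list(node.tris)
    return candidates(node.children[0], o, d) + candidates(node.children[1], o, d)

# Hypothetical scene: 100 small triangles spread along the x axis.
tris = [((float(k), 0.0, 0.0), (k + 0.5, 0.0, 0.0), (float(k), 0.5, 0.0))
        for k in range(100)]
bvh = Node(tris)
cands = candidates(bvh, (10.1, 0.1, -1.0), (0.0, 0.0, 1.0))
# only a few of the 100 triangles survive the box tests
```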
How does Nvidia's RTX technology do this internally? It's my understanding (please correct me if I'm wrong) that RTX does indeed organize the triangles to be raytraced into a tree data structure. My question is: which part of the rendering pipeline constructs this tree? Is it done on the application side?
But, more importantly, and this is the crux of my question: how are mesh deformations handled? (In other words, triangles that change shape and/or their positions relative to each other within the same mesh object.)
It is my understanding that a tree data structure works wonders when raytracing a triangle mesh... as long as the mesh is completely rigid and doesn't deform. If the mesh deforms (triangles change shape and/or their positions in relation to each other), the entire tree needs to be rebuilt, because the changed triangles may well end up in completely different locations in the tree.
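To make concrete why I think deformation invalidates the tree (again a toy sketch of my own, not anything RTX-specific, with made-up names): once animation moves a vertex, the bounding box computed at build time no longer encloses the triangle, so a traversal that prunes by that stale box could wrongly skip a valid hit.

```python
# Why a deforming mesh invalidates a prebuilt tree: the box stored at
# build time no longer bounds the deformed triangle, so traversal that
# prunes by that box would miss valid intersections.

def tri_aabb(tri):
    """Axis-aligned bounding box (lo, hi corners) of one triangle."""
    lo = tuple(min(v[i] for v in tri) for i in range(3))
    hi = tuple(max(v[i] for v in tri) for i in range(3))
    return lo, hi

def box_contains(box, p):
    """Is point p inside the box?"""
    return all(box[0][i] <= p[i] <= box[1][i] for i in range(3))

tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
built_box = tri_aabb(tri)                      # box computed at build time
deformed = (tri[0], tri[1], (0.0, 1.0, 5.0))   # animation moves one vertex
stale = not box_contains(built_box, deformed[2])
# stale is True: the build-time box no longer bounds the deformed triangle
```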
RTX demos, and especially BF5, show animated human characters (with e.g. their clothes being deformed by wind) appearing in raytraced reflections just fine. How is this achieved? Are the tree data structures rebuilt on every single frame, or is this done some other way?