Contextual Ambient Occlusion

19 Dec 2022 (modified: 05 May 2023) · GI 2023 · Readers: Everyone
Keywords: Ambient occlusion, volume rendering, global illumination, medical imaging, shadowing, transfer function, shading
TL;DR: In this paper, we present a new volumetric ambient occlusion algorithm called Contextual Ambient Occlusion (CAO) that supports real-time clipping of the volume.
Abstract: In this paper, we present a new volumetric ambient occlusion algorithm called Contextual Ambient Occlusion (CAO) that supports real-time clipping. The algorithm produces ambient occlusion images of exactly the same quality as Local Ambient Occlusion (LAO) while enabling real-time modification to the shape used to clip the volume. The main idea of the algorithm is that clipping only affects the ambient value of a small number of voxels; by identifying these voxels and recalculating the ambient factor only for them, it is possible to significantly increase rendering performance (by 2-5x) without decreasing the quality of the rendered image. Due to its fast performance, the algorithm is suitable for interactive environments where clipping changes may occur every frame. Additionally, the algorithm exhibits no stereoscopic inconsistency, which makes it suitable for mixed reality environments.
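The core mechanism described above — recomputing the ambient factor only for voxels whose occlusion neighbourhood is touched by a clipping change — can be illustrated with a minimal sketch. This is not the paper's implementation: the names (`update_ao`, `compute_ao_voxel`) and the cubic dilation of the dirty mask are assumptions made for illustration.

```python
# Illustrative sketch only (hypothetical names, not the paper's code):
# recompute the ambient-occlusion factor just for voxels whose
# neighbourhood is affected by a change in the clipping mask.
import numpy as np
from scipy.ndimage import binary_dilation

def update_ao(volume, ao, clip_old, clip_new, radius, compute_ao_voxel):
    """Update cached per-voxel AO after the clip mask changes.

    volume           : 3D scalar field
    ao               : 3D array of cached ambient-occlusion factors
    clip_old/new     : boolean masks of clipped voxels before/after the edit
    radius           : AO neighbourhood radius in voxels (the LAO ray length)
    compute_ao_voxel : full LAO evaluation at a single voxel index
    """
    changed = clip_old != clip_new  # voxels whose visibility flipped
    # Any voxel within `radius` of a changed voxel may see a different
    # occlusion neighbourhood. A cubic structuring element is a conservative
    # (slightly over-inclusive) stand-in for the spherical AO neighbourhood.
    struct = np.ones((2 * radius + 1,) * 3, dtype=bool)
    dirty = binary_dilation(changed, structure=struct)
    # Re-evaluate only the dirty voxels; every other voxel keeps its cached
    # value, so the result is identical to a full LAO recomputation.
    for idx in zip(*np.nonzero(dirty)):
        ao[idx] = compute_ao_voxel(volume, clip_new, idx)
    return ao
```

Because untouched voxels keep their cached LAO values and dirty voxels are re-evaluated with the full LAO procedure, the output matches a complete LAO pass exactly, which is the equivalence claimed in the abstract.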
Track: Graphics
Summary Of Changes: Note that the text in quotes is only an abbreviation of the changes that were made to address each comment.

- The meta-reviewer and Reviewer 1 mentioned that the reasoning behind using two depth maps should be explained. We added the following text in section 3.2: "The advantage of this algorithm is that it makes it very simple to test, for any point inside the volume, whether it is clipped, as it requires sampling only two depth maps." Later, in section 3.3, we added: "In the actual implementation, this Minkowski sum doesn't need to be explicitly performed, and the dilation shape doesn't need to be created in the form of a mesh, because doing so is computationally expensive; a significantly simpler approach can be taken instead." This refers to the calculation that can be done using the two depth maps; a sketch of this test appears after this list.
- The meta-reviewer as well as Reviewers 1 and 2 noted that we should demonstrate that LAO and CAO produce similar results with a side-by-side comparison. We realized that the original version of the paper didn't make it clear that CAO is not an approximation: it is numerically equivalent to LAO. The main contribution of the method is to avoid recomputing the ambient occlusion factor of voxels that are not affected by clipping. We changed the wording in the abstract to highlight that: "The algorithm produces ambient occlusion images of exactly the same quality as Local Ambient Occlusion (LAO) while enabling real-time modification to the shape used to clip the volume." To verify that our implementation is correct, we compared the screenshots produced by the LAO and CAO algorithms for the 3 scenes, and we added the following paragraph in section 4: "Finally, knowing that the algorithm produces exactly the same image as LAO, to make sure that our implementation is correct, we compared the screenshots produced by the LAO and CAO algorithms for the 3 scenes. [...] We have found that every pair of pixels had the exact same RGB components, confirming the correctness of our implementation. [...]" Since both algorithms produce exactly the same image, a side-by-side comparison would simply show the same screenshot twice, so we considered it unnecessary.
- The meta-reviewer and Reviewer 2 mentioned that the temporal stability of the algorithm should be discussed. Thus, in section 5 we added: "The CAO algorithm is able to recalculate in real time the changes to occlusion that occur in the volume when parts of this volume are clipped, while achieving exactly the same quality as LAO. Because the algorithm simply prevents unaffected voxels from being recalculated, the calculation is always exact and remains stable over successive frames."
- The meta-reviewer and Reviewer 3 asked us to perform a timing breakdown of the new technique. To do this, we re-ran the benchmark for all the scenes while measuring the time required to perform each of the runtime steps of the algorithm described in section 3.5: clipping (step 4), spherical ray cast (step 5), and rendering (step 6). We created a graph for each scene (Figure 9) showing the proportion of each step relative to the full frame calculation. In section 4, we added: "Third, we determined the computation cost breakdown when calculating a single frame of each of the 3 scenes using the LAO and CAO algorithms, with 6 and 54 rays, as can be seen in Fig. 9. The graphs demonstrate that, in terms of the amount of calculation, the longest step among those described in Sect. 3.5 is step 5, where the spherical ray cast is performed. Further, with increasing dimensions of the volume that is rendered, the proportion of the step 5 computations becomes even larger, overshadowing steps 4 and 6 in terms of calculation time." Additionally, in section 5 we added a sentence explaining why the speedup is higher for volumes with a higher 3D resolution and a higher number of rays: "[...] This is expected because in those cases the total amount of computation necessary for the spherical ray cast becomes larger, so cutting down the amount of computation at this step yields the best speedup, as can be seen in Fig. 9."
- Reviewer 1 suggested that we discuss other types of Boolean operations that the algorithm can support besides clipping, such as intersection and union. To address this, in section 3.2 we added a passage on using the intersection operator instead of the subtraction one: "However, other types of logical operations can be supported too. The algorithm could be adapted to work with volume probing [15], where the region inside the mesh is visible and everything else is invisible. This would correspond to the intersection logical operator between the mesh and the volume. [...]" A sketch of this mode switch also appears after this list. The union operation is not applicable because the clipping mesh doesn't contain any volumetric data that could be rendered, so it is not discussed in the article.
- All typos mentioned by Reviewer 1 were fixed.
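As referenced in the first item above, here is a minimal sketch of the two-depth-map clip test. It assumes the clipping mesh has been rasterized once into a nearest-surface and a farthest-surface depth map, which suffices for a convex clip shape; all names (`is_clipped`, `front_depth`, `back_depth`, `project`) are hypothetical, not the paper's API.

```python
# Hedged sketch of testing whether a point is clipped using two depth maps.
# Hypothetical names; assumes a convex clip shape rasterized from one view.
def is_clipped(p, front_depth, back_depth, project):
    """Return True if point p (volume space) lies inside the clip shape.

    front_depth, back_depth : 2D depth maps of the clip mesh's nearest and
                              farthest surfaces, re-rendered only when the
                              clip shape itself changes
    project(p) -> (u, v, d) : integer pixel coordinates and depth of p in
                              the view used to render the depth maps
    """
    u, v, d = project(p)
    # Inside the shape iff the point lies between the two surfaces along
    # the rasterization direction: just two texture fetches per query.
    return front_depth[v, u] <= d <= back_depth[v, u]
```

The per-query cost is constant regardless of the clip mesh's complexity, which is consistent with the remark quoted above that the Minkowski dilation never needs to be constructed as an explicit mesh.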
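For the Boolean-operation item: switching from the subtraction (clipping) operator to the intersection (probing) operator amounts to inverting the inside test. Again a hypothetical sketch, with `mode` and `inside_clip_shape` invented for illustration.

```python
# Hypothetical sketch of supporting both subtraction and intersection with
# the same inside test (e.g. the two-depth-map query above).
def voxel_visible(p, inside_clip_shape, mode="subtract"):
    inside = inside_clip_shape(p)
    if mode == "subtract":   # clipping: carve the shape out of the volume
        return not inside
    if mode == "intersect":  # probing [15]: keep only the region inside
        return inside
    raise ValueError(f"unknown mode: {mode}")
```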