We thank the reviewers for their valuable feedback, which we used to improve our paper. This document details how we addressed the individual issues raised in the reviews.
The EWA and SPTF results in Figure 1 appear pixel-for-pixel identical.
We verified that the images presented in the paper are correct. We therefore adjusted the caption to clarify that we consider EWA a high-quality but costly method, so the barely noticeable differences speak in favor of our computationally cheaper approach. Additionally, we increased the contrast in the FLIP error maps to better guide the reader toward the differences.
Eq. 2 looks like an application of the fundamental theorem of calculus and not like an application of Green's theorem.
Indeed, Eq. 2 does involve an application of the fundamental theorem of calculus to the texture function $M$ along each vertical line with respect to a horizontal baseline ($\mathrm{d}u$). The function $M$ can be expressed as $M(u,v) = \int_0^{v}{\mathcal{P}(u,v^\prime)\,\mathrm{d}v^\prime}$, where $(u,v)$ is the end point of this vertical line and $\mathcal{P}(u,v^\prime)$ is the texel value at $(u,v^\prime)$. Green's theorem then relates the line integral of $M$ around a closed boundary $C$ to an area integral over the region $Q$ bounded by $C$: $$\iint_Q{\underbrace{\mathcal{P}(u,v)}_{\frac{\partial M}{\partial v}}\,\mathrm{d}u\,\mathrm{d}v} = -\oint_C M(u,v)\,\mathrm{d}u.$$
We added a clarification to justify our use of Green's Theorem when deriving Eq. 2 in Section 3.1.
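As a side note for intuition, the discrete counterpart of this relation is the standard summed-area-table query: the area integral of $\mathcal{P}$ over an axis-aligned footprint reduces to four boundary samples of $M$. The Python sketch below illustrates this textbook principle only; it is not the code or the quadrilateral-footprint scheme from the paper.

```python
import numpy as np

def build_sat(texture: np.ndarray) -> np.ndarray:
    """Summed-area table with a leading row/column of zeros, so that
    sat[i, j] equals the sum of texture[:i, :j]. The zero baseline
    corresponds to M(u, 0) = 0 in the continuous derivation."""
    sat = np.zeros((texture.shape[0] + 1, texture.shape[1] + 1))
    sat[1:, 1:] = texture.cumsum(axis=0).cumsum(axis=1)
    return sat

def box_integral(sat: np.ndarray, v0: int, v1: int, u0: int, u1: int) -> float:
    """Integral of the texture over rows [v0, v1) and columns [u0, u1),
    evaluated from four boundary samples of the table -- the discrete
    counterpart of the contour integral over C."""
    return float(sat[v1, u1] - sat[v0, u1] - sat[v1, u0] + sat[v0, u0])

if __name__ == "__main__":
    tex = np.random.rand(64, 64)
    sat = build_sat(tex)
    # Constant-time filtered value over a 16x16 axis-aligned footprint:
    avg = box_integral(sat, 8, 24, 8, 24) / (16 * 16)
    assert np.isclose(avg, tex[8:24, 8:24].mean())
```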
The object in the video is commonly known as a 'Snow Globe'.
We renamed the object in the video to a "Snow Globe".
The coordinate transformations in Section 3.3 and Figure 4.
Above Eq. 4 in Section 3.3 and in Figure 4, we clarify that the SSAT texture is one row larger than the original texture, and we added a new Eq. 4 that describes the associated coordinate transformation. Furthermore, we integrated this transformation into Eq. 7.
Motivation for the additional vertical integration.
We added a new Figure 5 to illustrate how SSAT works for a slope $\lambda > 1$ and why an additional SAT sample is needed to complete the intended integral.
The step size equation in Section 3.3 is confusing.
In Section 3.3 (Step Size), we corrected a typo in the step size equation, $s = 4/k$, and adjusted the derived values in Section 4.1 (Memory Analysis) accordingly.
A typo in Fig. 6.
We rectified the typo "EWI" to "EWA" in Fig. 6.
In Table 1, the relative memory overhead does not take into account the additional pixels of the SSAT textures.
In the caption of Table 1, we clarify that the additional row of each SSAT table adds an overhead that is asymptotically negligible with respect to the texture resolution. Furthermore, below Eq. 7 we note that the SAT requires neither additional computation nor memory, since it is already available as the special case of the SSAT with zero slope.
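As an aside, the zero-slope reduction can be pictured with a toy construction: assume, purely for illustration, that a slanted table is built by shifting each texel column by an integer offset proportional to the slope before accumulating the usual prefix sums. The `build_ssat` helper below is hypothetical and not the paper's definition; it only demonstrates that with zero slope the shift vanishes, so the table coincides with a plain SAT and no separate table is needed.

```python
import numpy as np

def build_ssat(texture: np.ndarray, slope: float) -> np.ndarray:
    """Hypothetical 'slanted' SAT: shift column u down by round(|slope| * u)
    texels (integer shear, for illustration only), then accumulate the usual
    two-pass prefix sums. With slope == 0 the shift vanishes and the result
    is an ordinary SAT."""
    h, w = texture.shape
    pad = int(round(abs(slope) * (w - 1)))
    sheared = np.zeros((h + pad, w))
    for u in range(w):
        off = int(round(abs(slope) * u))
        sheared[off:off + h, u] = texture[:, u]
    table = np.zeros((sheared.shape[0] + 1, w + 1))
    table[1:, 1:] = sheared.cumsum(axis=0).cumsum(axis=1)
    return table

if __name__ == "__main__":
    tex = np.random.rand(32, 32)
    zero_slope = build_ssat(tex, 0.0)       # reduces to the plain SAT
    plain_sat = np.zeros((33, 33))
    plain_sat[1:, 1:] = tex.cumsum(axis=0).cumsum(axis=1)
    assert np.allclose(zero_slope, plain_sat)
```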
Source code.
We will consider releasing the source code at a later point.
Temporal stability.
We added a new temporal analysis to Section 4.1, including a new Figure 9(a). We measure the filtering error with respect to the ground truth during a smooth camera motion and observe both a lower absolute error and a lower variation compared to ANISO16, which is widely used and accepted as temporally stable in many applications. Furthermore, we added a preview of this experiment to our supplemental video.
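The per-frame bookkeeping behind such a temporal measurement can be sketched as follows. This is a minimal illustration, not our evaluation code: `flip_error` is a placeholder for a perceptual image-difference metric such as FLIP, and the rendered and ground-truth frame sequences are assumed to be given.

```python
import numpy as np

def flip_error(img: np.ndarray, ref: np.ndarray) -> float:
    """Placeholder for a perceptual image-difference metric (e.g. FLIP);
    here simply the mean absolute error."""
    return float(np.mean(np.abs(img - ref)))

def temporal_stats(frames, references):
    """Per-frame error against the ground truth, plus its mean (absolute
    error) and standard deviation (variation) over a camera motion."""
    errors = np.array([flip_error(f, r) for f, r in zip(frames, references)])
    return errors, errors.mean(), errors.std()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    refs = [rng.random((64, 64)) for _ in range(30)]               # ground truth
    ours = [r + 0.01 * rng.standard_normal(r.shape) for r in refs]  # filtered frames
    _, mean_err, std_err = temporal_stats(ours, refs)
    print(f"mean error {mean_err:.4f}, variation {std_err:.4f}")
```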
Efficient and precise texture filtering is essential in various applications. However, there is often a trade-off between coarse real-time approximations and accurate computationally expensive supersampling. We introduce a novel efficient texture-filtering method over arbitrary quadrilateral footprints, achieving high accuracy at a low computational cost. We achieve this by pre-computing integration tables that sparsely sample the space of possible footprints. Finally, we compare the qualitative and computational performance of our method to commonly used techniques and demonstrate various applications for high-quality real-time image synthesis, including normal filtering, soft shadow mapping, and glint rendering.