Abstract: Monte Carlo sampling techniques are used to estimate high-dimensional integrals that model
the physics of light transport in virtual scenes for computer graphics applications. These
methods rely on the law of large numbers to estimate expectations via simulation, typically resulting in slow convergence. Their errors usually manifest as undesirable grain in the pictures
generated by image synthesis algorithms. It is well known that these errors diminish when
the samples are chosen appropriately. A well-known technique for reducing error operates by subdividing the integration domain, estimating the integral in each stratum, and aggregating these values into a stratified sampling estimate. Naïve methods for stratification, based on a lattice (grid), are known to improve the convergence rate of Monte Carlo, but require a number of samples that grows exponentially with the dimensionality of the domain.
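As a point of reference for this lattice-based baseline, the sketch below shows grid (jittered) stratification of the unit hypercube; the function name and interface are illustrative and not taken from the paper.

```python
import random

def jittered_grid_samples(k, d=2):
    """Grid (lattice) stratification: split [0,1]^d into k^d equal cells
    and jitter one uniform sample inside each cell -- the sample count k^d
    grows exponentially with the dimension d."""
    samples = []
    def recurse(cell_index):
        if len(cell_index) == d:
            # one uniformly jittered point inside this grid cell
            samples.append([(i + random.random()) / k for i in cell_index])
            return
        for i in range(k):
            recurse(cell_index + [i])
    recurse([])
    return samples

# 4 strata per axis in 2D gives 16 samples; in d dimensions it would be 4**d.
points = jittered_grid_samples(4, d=2)
```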
We propose a simple stratification scheme for d-dimensional hypercubes using the kd-tree data structure. Our scheme enables the generation of an arbitrary number of equal-volume partitions of the rectangular domain, and n samples can be generated in O(n) time. Since we
do not always need to explicitly build a kd-tree, we provide a simple procedure that allows
the sample set to be drawn fully in parallel without any precomputation or storage, speeding
up sampling to O(log n) time per sample when executed on n cores. If the tree is implicitly
precomputed (O(n) storage), the parallelised run time reduces to O(1) on n cores. In addition
to these benefits, we provide an upper bound on the worst-case star discrepancy for n samples matching that of lattice-based sampling strategies, which arise as a special case of our proposed method. We use a number of quantitative and qualitative tests to compare our method against state-of-the-art samplers for image synthesis.
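The abstract does not spell out the kd-tree construction itself; the following is a minimal sketch of one way such recursive equal-volume splitting could work, assuming the stratum count is halved at each split along a cycling axis and the split plane is placed in proportion to the two stratum counts. All names and parameters are illustrative, not the paper's.

```python
import random

def kd_stratified_samples(n, d, lo=None, hi=None, axis=0):
    """Sketch: recursively split a box into two sub-boxes holding floor(n/2)
    and ceil(n/2) strata, placing the split plane in proportion to the counts
    so every final stratum has equal volume, then jitter one sample per stratum."""
    if lo is None:
        lo, hi = [0.0] * d, [1.0] * d
    if n == 1:
        # single stratum: one uniformly jittered point inside the box
        return [[lo[k] + random.random() * (hi[k] - lo[k]) for k in range(d)]]
    n_left, n_right = n // 2, n - n // 2
    # split the current axis in proportion to the stratum counts
    split = lo[axis] + (hi[axis] - lo[axis]) * n_left / n
    hi_left, lo_right = hi[:], lo[:]
    hi_left[axis], lo_right[axis] = split, split
    next_axis = (axis + 1) % d
    return (kd_stratified_samples(n_left, d, lo, hi_left, next_axis)
            + kd_stratified_samples(n_right, d, lo_right, hi, next_axis))

# any sample count works, e.g. 7 equal-volume strata of the 3D unit cube
points = kd_stratified_samples(7, 3)
```

Because each split only needs the stratum count and the box extents, a leaf can also be located directly by descending this implicit tree, which is what makes storage-free parallel generation plausible in the regime the abstract describes.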