Keywords: RANSAC, Fundamental Matrix Estimation, Outlier Rejection
TL;DR: A reinforcement learning framework for outlier rejection in model estimation
Abstract: The rejection of outliers in observed data is the foundation of accurate model estimation. Random sample consensus (RANSAC) is a classical algorithm that aims to identify the inliers for robust model estimation. After sampling a series of minimal sets, each sufficient to support a hypothesis, and generating the corresponding hypotheses, the hypothesis that earns the maximum consensus is chosen for the final model estimation. However, this strategy can suffer exponential growth in computation as the outlier ratio increases. Moreover, fitting a model from a minimal set may hinder accurate model estimation, especially when inliers are extremely rare; a model estimated from more observations may be better than one fitted from a minimal set. To address this problem, we propose reinforcement sample consensus (R-SAC), which trains a neural network with reinforcement learning to classify inliers and outliers among all correspondences. During training, we take the number of inliers as the reward and encourage the agent to find the optimal subset supporting the final model estimation in an unsupervised manner. During inference, the R-SAC network directly generates the inlier set, which significantly reduces the computational cost of sampling and yields a more robust model hypothesis fitted from more correspondences. Empirical results show that our method achieves performance comparable to previous supervised counterparts with remarkable efficiency, especially when the outlier ratio is large.
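To make the sampling-free idea concrete, below is a minimal sketch (not the authors' implementation) of how a per-correspondence scoring network could be trained with a REINFORCE-style objective whose reward is the consensus size, and then used at inference time to fit a fundamental matrix from all predicted inliers at once. The network architecture, the Sampson-error threshold, and all names and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the R-SAC idea (illustrative, not the authors' code).
# Assumptions: a per-correspondence scoring MLP, a Bernoulli policy over
# inlier/outlier labels, reward = consensus size of the fitted fundamental
# matrix, and a plain REINFORCE gradient. Names and thresholds are made up.
import torch
import torch.nn as nn


class InlierScorer(nn.Module):
    """Scores each correspondence [u1, v1, u2, v2] with an inlier probability."""

    def __init__(self, dim=4, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, corr):                      # corr: (N, 4)
        return torch.sigmoid(self.net(corr)).squeeze(-1)


def fit_fundamental(x1, x2):
    """Rank-2 fundamental matrix from >= 8 matches via the 8-point algorithm."""
    u1, v1, u2, v2 = x1[:, 0], x1[:, 1], x2[:, 0], x2[:, 1]
    A = torch.stack([u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2,
                     u1, v1, torch.ones_like(u1)], dim=1)
    _, _, Vt = torch.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt2 = torch.linalg.svd(F)
    S = S.clone()
    S[-1] = 0.0                                   # enforce rank 2
    return U @ torch.diag(S) @ Vt2


def sampson_error(F, x1, x2):
    """First-order geometric (Sampson) error of each correspondence under F."""
    ones = torch.ones(x1.shape[0], 1)
    p1 = torch.cat([x1, ones], dim=1)             # (N, 3) homogeneous points
    p2 = torch.cat([x2, ones], dim=1)
    Fp1, Ftp2 = p1 @ F.T, p2 @ F
    num = (p2 * Fp1).sum(1) ** 2
    den = Fp1[:, 0] ** 2 + Fp1[:, 1] ** 2 + Ftp2[:, 0] ** 2 + Ftp2[:, 1] ** 2
    return num / den


def training_step(scorer, optimizer, corr, thresh=1e-3):
    """One unsupervised REINFORCE update: the reward is the number of inliers."""
    probs = scorer(corr)
    policy = torch.distributions.Bernoulli(probs)
    mask = policy.sample()                        # sampled inlier/outlier labels
    sel = mask.bool()
    if sel.sum() < 8:                             # not enough points for a fit
        return 0.0
    with torch.no_grad():                         # reward path needs no gradient
        F = fit_fundamental(corr[sel][:, :2], corr[sel][:, 2:])
        reward = (sampson_error(F, corr[:, :2], corr[:, 2:]) < thresh).sum().float()
    loss = -reward * policy.log_prob(mask).sum()  # REINFORCE, no inlier labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.item()


def infer(scorer, corr, tau=0.5):
    """Sampling-free inference: threshold scores once, fit from all predicted inliers."""
    with torch.no_grad():
        keep = scorer(corr) > tau
        return fit_fundamental(corr[keep][:, :2], corr[keep][:, 2:])
```

In this sketch the consensus count is used directly as a non-differentiable reward, so no ground-truth inlier labels are needed; in practice a reward baseline and batching over many image pairs would likely be required to stabilize training.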