add homography accuracy evaluation
lzx551402 committed Apr 10, 2020
1 parent fec776f commit c70a938
Showing 4 changed files with 55 additions and 33 deletions.
30 changes: 16 additions & 14 deletions README.md
@@ -60,28 +60,31 @@ cd /local/aslfeat && python hseq_eval.py --config configs/hseq_eval.yaml
At the end of the run, we report the average number of features, repeatability, precision, matching score, recall, and mean matching accuracy (a.k.a. MMA). The evaluation results will be displayed as:
```bash
0 /data/hpatches-sequences-release/v_abstract
-5000 [0.7105522 0.7984268 0.5163258 0.71908796 0.7664455 ]
+5000 [0.6577944 0.7984268 0.49771258 0.73826474 0.7664455 0.4]
1 /data/hpatches-sequences-release/v_adam
-1620 [1.25235 0.88788044 0.72517836 0.5968998 0.8819124 ]
+1620 [0.66010183 0.88788044 0.45460123 0.6796689 0.8819124 0.8]
...
----------i_eval_stats----------
...
----------v_eval_stats----------
...
----------all_eval_stats----------
-avg_n_feat 3916
-avg_rep 0.7831441
-avg_precision 0.7396421
-avg_matching_score 0.4628032
-avg_recall 0.6226283
-avg_MMA 0.7225959
+avg_n_feat 3924
+avg_rep 0.6222487
+avg_precision 0.7397446
+avg_matching_score 0.4186731
+avg_recall 0.63664365
+avg_MMA 0.7226566
+avg_homography_accuracy 0.72962976
```
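
These numbers are per-sequence averages; the per-pair ratios behind them mirror the `eval_stats` update in the `hseq_eval.py` diff below. A minimal sketch (ours, not part of the repository) of the formulas visible in that diff:

```python
# Sketch of the per-pair metrics accumulated in hseq_eval.py (see the diff
# below); repeatability is accumulated analogously from covisible keypoints.
def pair_metrics(num_inlier, num_putative, num_cov_feat, gt_num,
                 num_mma_inlier, num_mma_putative):
    precision = num_inlier / max(num_putative, 1)       # inliers among putative matches
    matching_score = num_inlier / max(num_cov_feat, 1)  # inliers among covisible features
    recall = num_inlier / max(gt_num, 1)                # inliers among ground-truth matches
    mma = num_mma_inlier / max(num_mma_putative, 1)     # mean matching accuracy
    return precision, matching_score, recall, mma
```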

The results for repeatability and matching score differ from those reported in the paper, as we now apply a [symmetric check](https://github.com/lzx551402/ASLFeat/commit/0df33b75453d73af28927f203a2892a0acf6956f) when counting the number of covisible features (following [SuperPoint](https://github.com/rpautrat/SuperPoint)). This change should not affect the conclusions of the ablation study, but is useful when comparing with other relevant papers. We thank [Sida Peng](https://pengsida.net/) for pointing this out when reproducing this work.
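
For intuition, here is a NumPy sketch of such a symmetric check (our reading of the linked commit, not the repository's exact code): a keypoint counts as covisible if its projection under the ground-truth homography lands inside the other image, and the check runs in both directions; `num_cov_feat` is then the average of the two counts.

```python
import numpy as np

def symmetric_covisible_masks(ref_kpts, test_kpts, ref_shape, test_shape, gt_homo):
    def project(pts, H):
        pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
        proj = pts_h @ H.T
        return proj[:, :2] / proj[:, 2:]

    def inside(pts, shape):
        h, w = shape[:2]
        return (pts[:, 0] >= 0) & (pts[:, 0] < w) & (pts[:, 1] >= 0) & (pts[:, 1] < h)

    ref_mask = inside(project(ref_kpts, gt_homo), test_shape)                  # ref -> test
    test_mask = inside(project(test_kpts, np.linalg.inv(gt_homo)), ref_shape)  # test -> ref
    return ref_mask, test_mask
```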

To plot the results (i.e., to reproduce Fig.3 in the paper), include the [cached files](cache/) and use the tool provided by [D2-Net](https://github.com/mihaidusmanu/d2-net/blob/master/hpatches_sequences/HPatches-Sequences-Matching-Benchmark.ipynb).
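
If you only need a quick standalone curve, a rough sketch follows; the cache format is defined by the D2-Net notebook, so the file name and dict layout below are assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical cache layout: a pickled dict mapping pixel-error thresholds
# to MMA values; consult the D2-Net notebook for the authoritative reader.
errors = np.load('cache/aslfeat.npy', allow_pickle=True).item()
thresholds = sorted(errors)
plt.plot(thresholds, [errors[t] for t in thresholds], label='ASLFeat')
plt.xlabel('error threshold [px]')
plt.ylabel('MMA')
plt.legend()
plt.show()
```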

### 2. Benchmark on [FM-Bench](http://jwbian.net/fm-bench)

-Download the (customized) evaluation pipeline, and follow the instruction to download the [testing data](https://1drv.ms/f/s!AiV6XqkxJHE2g3ZC4zYYR05eEY_m):
+Download the evaluation pipeline (customized for data loading and for avoiding randomness), and follow the instructions to download the [testing data](https://1drv.ms/f/s!AiV6XqkxJHE2g3ZC4zYYR05eEY_m):
```bash
git clone https://github.com/lzx551402/FM-Bench.git
```
@@ -174,16 +177,15 @@ cd /local/aslfeat && python evaluations.py --config configs/imw2020_eval.yaml

1. Training data is provided in [GL3D](https://github.com/lzx551402/GL3D).

-2. You might be also interested in a 3D local feature, [D3Feat](https://github.com/XuyangBai/D3Feat/).
+2. You might also be interested in a 3D local feature ([D3Feat](https://github.com/XuyangBai/D3Feat/)).

-# Acknowledgements
+## Acknowledgements

-1. The backbone networks and the learning scheme is heavily borrowed from [D2-Net](https://github.com/mihaidusmanu/d2-net).
+1. The backbone networks and the learning scheme are heavily borrowed from [D2-Net](https://github.com/mihaidusmanu/d2-net).

-2. We thank you the authors of [R2D2](https://github.com/naver/r2d2) for sharing their evaluation results on HPatches that helped us plot Fig.1. The updated results of R2D2 are even more excited.
+2. We thank the authors of [R2D2](https://github.com/naver/r2d2) for sharing their evaluation results on HPatches, which helped us plot Fig.1. The updated results of R2D2 are even more exciting.

3. We refer to the public implementation of [SuperPoint](https://github.com/rpautrat/SuperPoint) for organizing the code and implementing the evaluation metrics.

4. We implement the modulated DCN following [this implementation](https://github.com/DHZS/tf-deformable-conv-layer/blob/master/nets/deformable_conv_layer.py). The current implementation is not efficient, and we expect a native TensorFlow implementation to become available in the future.

5. We thank [Sida Peng](https://pengsida.net/) for sharing his experience in reproducing this work and for pointing out flaws in our implementation of the evaluation metrics.
13 changes: 8 additions & 5 deletions hseq_eval.py
@@ -60,7 +60,7 @@ def matcher(consumer_queue, sess, evaluator, config):
continue
ref_img, ref_kpts, ref_descs, seq_info = record[0]

-eval_stats = np.array((0, 0, 0, 0, 0, 0, 0), np.float32)
+eval_stats = np.array((0, 0, 0, 0, 0, 0, 0, 0), np.float32)

seq_idx = seq_info[0]
seq_name = seq_info[1]
@@ -73,7 +73,7 @@ def matcher(consumer_queue, sess, evaluator, config):
num_feat = min(ref_kpts.shape[0], test_kpts.shape[0])
if num_feat > 0:
mma_putative_matches = evaluator.feature_matcher(
-    sess, ref_descs, test_descs, test_kpts)
+    sess, ref_descs, test_descs)
else:
mma_putative_matches = []
mma_inlier_matches = evaluator.get_inlier_matches(
@@ -83,7 +83,7 @@ def matcher(consumer_queue, sess, evaluator, config):
# get covisible keypoints
ref_mask, test_mask = evaluator.get_covisible_mask(ref_kpts, test_kpts,
ref_img.shape, test_img.shape,
-    gt_homo)
+    gt_homo, scaling)
cov_ref_coord, cov_test_coord = ref_kpts[ref_mask], test_kpts[test_mask]
cov_ref_feat, cov_test_feat = ref_descs[ref_mask], test_descs[test_mask]
num_cov_feat = (cov_ref_coord.shape[0] + cov_test_coord.shape[0]) / 2
@@ -92,10 +92,12 @@ def matcher(consumer_queue, sess, evaluator, config):
# establish putative matches
if num_cov_feat > 0:
putative_matches = evaluator.feature_matcher(
-    sess, cov_ref_feat, cov_test_feat, cov_test_coord)
+    sess, cov_ref_feat, cov_test_feat)
else:
putative_matches = []
num_putative = max(len(putative_matches), 1)
+# get homography accuracy
+correctness = evaluator.compute_homography_accuracy(cov_ref_coord, cov_test_coord, ref_img.shape, putative_matches, gt_homo, scaling)
# get inlier matches
inlier_matches = evaluator.get_inlier_matches(
cov_ref_coord, cov_test_coord, putative_matches, gt_homo, scaling)
@@ -107,7 +109,8 @@ def matcher(consumer_queue, sess, evaluator, config):
num_inlier / max(num_putative, 1), # precision
num_inlier / max(num_cov_feat, 1), # matching score
num_inlier / max(gt_num, 1), # recall
-num_mma_inlier / max(num_mma_putative, 1))) / 5 # MMA
+num_mma_inlier / max(num_mma_putative, 1),
+correctness)) / 5 # MMA

print(int(eval_stats[1]), eval_stats[2:])
evaluator.stats['all_eval_stats'] += eval_stats
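
A detail worth noting in this accumulation: each HPatches sequence pairs the reference image with five test images, and every pair adds the stats vector divided by 5, so the leading entry sums to exactly one per sequence; `print_stats` later divides by that entry. A toy check (the per-pair values are hypothetical):

```python
import numpy as np

stats = np.zeros(8, np.float32)
for _ in range(5):  # five test images per HPatches sequence
    # (count, n_feat, rep, precision, m-score, recall, MMA, homo-accuracy)
    per_pair = np.array((1, 4000, 0.62, 0.74, 0.42, 0.64, 0.72, 1.0), np.float32)
    stats += per_pair / 5
print(stats[0])  # -> 1.0, the per-sequence count used as the divisor in print_stats
```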
11 changes: 3 additions & 8 deletions models/cnn_wrapper/aslfeat.py
@@ -63,7 +63,7 @@ def setup(self):
prep_dense_feat_map = tmp_feat_map

if det_config['use_peakiness']:
-alpha, beta = self.our_score(prep_dense_feat_map, ksize=3,
+alpha, beta = self.peakiness_score(prep_dense_feat_map, ksize=3,
need_norm=det_config['need_norm'],
dilation=scale[idx], name=tmp_name)
else:
@@ -102,7 +102,7 @@ def setup(self):
[kpt_inds[:, :, 1], kpt_inds[:, :, 0]], axis=-1, name='kpts')
self.endpoints['scores'] = tf.identity(kpt_score, name='scores')

-def our_score(self, inputs, ksize=3, all_softplus=True, need_norm=True, dilation=1, name='conv'):
+def peakiness_score(self, inputs, ksize=3, need_norm=True, dilation=1, name='conv'):
if need_norm:
from tensorflow.python.training.moving_averages import assign_moving_average
with tf.compat.v1.variable_scope('tower', reuse=self.reuse):
@@ -124,12 +124,7 @@ def our_score(self, inputs, ksize=3, all_softplus=True, need_norm=True, dilation
avg_inputs = tf.nn.pool(pad_inputs, [ksize, ksize],
'AVG', 'VALID', dilation_rate=[dilation, dilation])
alpha = tf.math.softplus(inputs - avg_inputs)

-if all_softplus:
-    beta = tf.math.softplus(inputs - tf.reduce_mean(inputs, axis=-1, keepdims=True))
-else:
-    channel_wise_max = tf.reduce_max(inputs, axis=-1, keepdims=True)
-    beta = inputs / (channel_wise_max + 1e-6)
+beta = tf.math.softplus(inputs - tf.reduce_mean(inputs, axis=-1, keepdims=True))
return alpha, beta

def d2net_score(self, inputs, ksize=3, need_norm=True, dilation=1, name='conv'):
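For readers unfamiliar with the renamed function, here is a NumPy sketch of what `peakiness_score` computes, under our simplifications (no moving-average normalization, no dilation, reflect padding in place of the pooled 'SAME' padding): `alpha` measures spatial peakiness against a local average, `beta` measures channel-wise peakiness against the per-pixel channel mean.

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def peakiness_score_np(feat, ksize=3):
    """Simplified peakiness score on an (H, W, C) feature map."""
    h, w, _ = feat.shape
    pad = ksize // 2
    padded = np.pad(feat, ((pad, pad), (pad, pad), (0, 0)), mode='reflect')
    local_avg = np.empty_like(feat)
    for i in range(h):
        for j in range(w):
            local_avg[i, j] = padded[i:i + ksize, j:j + ksize].mean(axis=(0, 1))
    alpha = softplus(feat - local_avg)                         # spatial peakiness
    beta = softplus(feat - feat.mean(axis=-1, keepdims=True))  # channel peakiness
    return alpha, beta
```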
34 changes: 28 additions & 6 deletions utils/evaluator.py
@@ -9,9 +9,9 @@ def __init__(self, config):
self.err_thld = config['err_thld']
self.matches = self.bf_matcher_graph()
self.stats = {
-'i_eval_stats': np.array((0, 0, 0, 0, 0, 0, 0), np.float32),
-'v_eval_stats': np.array((0, 0, 0, 0, 0, 0, 0), np.float32),
-'all_eval_stats': np.array((0, 0, 0, 0, 0, 0, 0), np.float32),
+'i_eval_stats': np.array((0, 0, 0, 0, 0, 0, 0, 0), np.float32),
+'v_eval_stats': np.array((0, 0, 0, 0, 0, 0, 0, 0), np.float32),
+'all_eval_stats': np.array((0, 0, 0, 0, 0, 0, 0, 0), np.float32),
}

def homo_trans(self, coord, H):
@@ -44,12 +44,15 @@ def mnn_matcher(self, sess, descriptors_a, descriptors_b):
matches = sess.run(self.matches, input_dict)
return matches.T

-def feature_matcher(self, sess, ref_feat, test_feat, test_coord=None):
+def feature_matcher(self, sess, ref_feat, test_feat):
matches = self.mnn_matcher(sess, ref_feat, test_feat)
matches = [cv2.DMatch(matches[i][0], matches[i][1], 0) for i in range(matches.shape[0])]
return matches
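
For context, `feature_matcher` wraps a mutual-nearest-neighbour matcher built as a TF graph; the matching rule itself is simple. A NumPy sketch (ours; assuming L2-normalized descriptors, so a dot product serves as similarity):

```python
import numpy as np

def mnn_match(desc_a, desc_b):
    sim = desc_a @ desc_b.T       # pairwise similarity
    nn_ab = sim.argmax(axis=1)    # best match in b for each a
    nn_ba = sim.argmax(axis=0)    # best match in a for each b
    ids = np.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == ids  # keep only mutual nearest neighbours
    return np.stack([ids[mutual], nn_ab[mutual]], axis=1)
```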

-def get_covisible_mask(self, ref_coord, test_coord, ref_img_shape, test_img_shape, gt_homo):
+def get_covisible_mask(self, ref_coord, test_coord, ref_img_shape, test_img_shape, gt_homo, scaling=1.):
+    ref_coord = ref_coord / scaling
+    test_coord = test_coord / scaling

proj_ref_coord = self.homo_trans(ref_coord, gt_homo)
proj_test_coord = self.homo_trans(test_coord, np.linalg.inv(gt_homo))

@@ -92,6 +92,24 @@ def get_gt_matches(self, ref_coord, test_coord, gt_homo, scaling=1.):
gt_num = (gt_num0 + gt_num1) / 2
return gt_num

+def compute_homography_accuracy(self, ref_coord, test_coord, ref_img_shape, putative_matches, gt_homo, scaling=1.):
+    ref_coord = np.float32([ref_coord[m.queryIdx] for m in putative_matches]) / scaling
+    test_coord = np.float32([test_coord[m.trainIdx] for m in putative_matches]) / scaling
+
+    pred_homo, _ = cv2.findHomography(ref_coord, test_coord, cv2.RANSAC)
+    if pred_homo is None:
+        correctness = 0
+    else:
+        corners = np.array([[0, 0],
+                            [ref_img_shape[1] / scaling - 1, 0],
+                            [0, ref_img_shape[0] / scaling - 1],
+                            [ref_img_shape[1] / scaling - 1, ref_img_shape[0] / scaling - 1]])
+        real_warped_corners = self.homo_trans(corners, gt_homo)
+        warped_corners = self.homo_trans(corners, pred_homo)
+        mean_dist = np.mean(np.linalg.norm(real_warped_corners - warped_corners, axis=1))
+        correctness = float(mean_dist <= self.err_thld)
+    return correctness

def print_stats(self, key):
avg_stats = self.stats[key] / max(self.stats[key][0], 1)
avg_stats = avg_stats[1:]
@@ -101,4 +101,5 @@ def print_stats(self, key):
print('avg_precision', avg_stats[2])
print('avg_matching_score', avg_stats[3])
print('avg_recall', avg_stats[4])
-print('avg_MMA', avg_stats[5])
+print('avg_MMA', avg_stats[5])
+print('avg_homography_accuracy', avg_stats[6])
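
To make the new metric concrete, here is a self-contained sketch that mirrors `compute_homography_accuracy` (scaling folded out; the function names, `err_thld=3`, and toy values are ours). A homography is estimated from the putative matches with RANSAC, the four image corners are warped with both the estimated and the ground-truth homography, and the pair counts as "correct" if the mean corner error stays below the threshold. On matches exactly consistent with the ground truth it should report 1.0:

```python
import cv2
import numpy as np

def warp(pts, H):
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:]

def homography_accuracy(ref_kpts, test_kpts, img_shape, gt_homo, err_thld=3):
    pred_homo, _ = cv2.findHomography(np.float32(ref_kpts), np.float32(test_kpts), cv2.RANSAC)
    if pred_homo is None:
        return 0.0
    h, w = img_shape[:2]
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]], np.float64)
    mean_dist = np.mean(np.linalg.norm(warp(corners, gt_homo) - warp(corners, pred_homo), axis=1))
    return float(mean_dist <= err_thld)

# Toy check: keypoints warped by the ground-truth homography itself.
gt = np.array([[1.0, 0.02, 5.0], [-0.01, 1.0, -3.0], [1e-4, 0.0, 1.0]])
ref = np.random.RandomState(0).rand(50, 2) * 480
print(homography_accuracy(ref, warp(ref, gt), (480, 640), gt))  # -> 1.0
```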
