I have modified the ADE20K scene parsing benchmark produced by MIT. The original dataset contains 150 categories (151 including background) and sets background = 0, which is ignored.
I have chosen 17 categories, relabeled the previous background (label 0) as 255, and set all remaining categories to 0 (background). Thus ignore_label = 255.
I counted the pixels in the 20210 images after the modification:
| category idx | pixel count |
| --- | --- |
| 0 | 1637454744 |
| 1 | 737138625 |
| 2 | 503599787 |
| 3 | 414602893 |
| 4 | 290185267 |
| 5 | 211497624 |
| 6 | 93447406 |
| 7 | 84733791 |
| 8 | 78265150 |
| 9 | 74475972 |
| 10 | 70227430 |
| 11 | 55492152 |
| 12 | 28967419 |
| 13 | 28361240 |
| 14 | 24627334 |
| 15 | 9994821 |
| 16 | 7816482 |
| 17 | 3719357 |
| 255 | 411637498 |
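For reference, here is a minimal sketch of how such per-category counts could be reproduced from the modified label maps with NumPy and PIL; the directory path and file pattern are hypothetical placeholders, not part of the original setup:

```python
# Sketch: count pixels per label value across all annotation PNGs.
# 'ADE20K_modified/annotations/*.png' is a hypothetical path.
import glob
import numpy as np
from PIL import Image

counts = np.zeros(256, dtype=np.int64)  # label values 0..17 plus ignore label 255
for path in glob.glob('ADE20K_modified/annotations/*.png'):
    label = np.array(Image.open(path))
    counts += np.bincount(label.ravel(), minlength=256)

for idx in list(range(18)) + [255]:
    print(idx, counts[idx])
```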
The background (category 0) accounts for about 0.6 of the total pixel count of all the other categories (1-17) combined.
However, the background is about 440 times the 17th category (1637454744 / 3719357 = 440.3),
and about 210 times the 16th (1637454744 / 7816482 = 209.5).
Does anyone have a suggestion on how to set the weights for these categories?
Should I adjust the weights for only a few categories, or for all 17?
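One common heuristic for deriving weights directly from the pixel counts is median-frequency balancing (weight_c = median frequency / frequency of class c); this is just one option, not something prescribed by DeepLab. A minimal NumPy sketch using the counts posted above, whose output would only be a starting point for tuning:

```python
# Sketch: median-frequency balancing from the pixel counts above.
# Rare classes (e.g., 16, 17) get weights much larger than 1.
import numpy as np

pixel_counts = np.array([
    1637454744, 737138625, 503599787, 414602893, 290185267, 211497624,
    93447406, 84733791, 78265150, 74475972, 70227430, 55492152,
    28967419, 28361240, 24627334, 9994821, 7816482, 3719357,
], dtype=np.float64)  # categories 0..17; the ignore label 255 is excluded

freq = pixel_counts / pixel_counts.sum()
weights = np.median(freq) / freq

for idx, w in enumerate(weights):
    print(f"label{idx}_weight = {w:.2f}")
```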
```python
weights = (tf.to_float(tf.equal(scaled_labels, 0)) * label0_weight +
           tf.to_float(tf.equal(scaled_labels, 1)) * label1_weight +
           tf.to_float(tf.equal(scaled_labels, ignore_label)) * 0.0)
```
where you need to tune label0_weight and label1_weight (e.g., set label0_weight = 1 and increase label1_weight).
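If you want to weight all 18 categories rather than just two, the same pattern can be extended in a loop. A minimal sketch, assuming the per-class weights (e.g., from the counts above) are available as a Python list `label_weights` of length 18; the helper name is hypothetical, and `scaled_labels` / `ignore_label` are the same tensors as in the snippet above:

```python
import tensorflow as tf

def build_weights(scaled_labels, label_weights, ignore_label=255):
    # Start from zero and add each class weight where the label matches.
    weights = tf.zeros_like(scaled_labels, dtype=tf.float32)
    for class_idx, class_weight in enumerate(label_weights):
        weights += tf.to_float(tf.equal(scaled_labels, class_idx)) * class_weight
    # Pixels labeled ignore_label keep weight 0, since they never match
    # any class index above.
    return weights
```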