Confusion Matrix with very low accuracy #6
Comments
Thanks for the detailed message. I can't comment on the warnings you are getting during training, but clearly something is going wrong. I suggest you start by running that code line-by-line from an IDE (Spyder or similar) with your inputs, examining the outputs of each line as you go, in order to find the bug. These tools were made for a USGS CDI workshop in 2018. They won't be supported indefinitely by me without further funding (sorry, I'm 100% soft money).
Thank you Dan, I will follow your advice. I hope I can find, understand, and fix the bug. Have a lovely week!
@jnifosi see relevant update in [link]. Also, you are referring to [link].
Thank you so much. I will take a look at it right away! |
Hello!
My research team and I are having a lot of difficulty figuring out why we're having trouble with the confusion matrix. I ran the code and the tiles that were generated were mostly correct in their classification, yet our accuracy is between 0.06 and 0.08 and our matrix (attached here) looks terrible. It does not make sense in any way: even the most obvious and common tile, water (a very distinctive blue tile), does not get classified as water (0%!). I attached a screenshot of the test and train folders showing water tiles and agriculture tiles.
I decided to run the code from the very beginning and I noticed some error messages on each of the Deep Neural Network steps I ran. The main errors I got are copied below; I believe these errors could shed some light on what is going on. I am attaching the confusion matrix as well as a Word document showing the full output, but the error lines are summarized below for your convenience.
Thank you in advance for all the help and for your time and have a wonderful day!
Notes:
-I am deleting the bottlenecks every time I run the code, so that doesn't seem to be the problem so far.
-These are the commands I am using:
GROUNDTRUTHING: python create_groundtruth\label_1image_crf.py -w 900 -s 0.25
TILES: python create_library\retile.py -t 96 -a 0.8 -b 0.9
TRAINING DCNN:
python train_dcnn_tfhub\retrain.py --image_dir Contemporary_outside\train\tile_95 --tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v2_100_96/classification/1 --how_many_training_steps 1000 --learning_rate 0.001 --output_labels op_labels.txt --output_graph OStrainingDemo_v1_96_1000_0001.pb --bottleneck_dir bottlenecks --summaries_dir summaries
EVALUATION: python eval_imrecog\test_class_tiles.py -n 1000
-The very low accuracy I am getting:
mean accuracy 0.076265 (N=1000)
mean f-score 0.117888 (N=1000)
mean prob. 0.698648 (N=1000)
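As an aside, the symptom described (a common class like water scoring 0% on its diagonal while the tiles themselves look correctly sorted) is what a confusion matrix looks like when the predicted label indices are mapped to the wrong class names. A minimal sketch with scikit-learn, using made-up class names and a deliberately shifted prediction array, not the repo's actual evaluation code:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Hypothetical classes and tiny sample; indices 0/1/2 stand in for the
# real label list written to op_labels.txt during training.
labels = ["agriculture", "water", "urban"]
y_true = np.array([1, 1, 0, 2, 1, 0])   # mostly water tiles
y_pred = np.array([0, 0, 1, 1, 0, 2])   # systematically shifted by one

# Rows are true classes, columns predicted; a shifted mapping empties
# the diagonal even though the model may be internally consistent.
cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
print(cm)
print("accuracy:", accuracy_score(y_true, y_pred))  # -> 0.0
```

If the evaluation script's class ordering disagrees with the ordering in the trained model's output labels file, near-zero accuracy like this is the result; checking that ordering line-by-line, as suggested above, is a cheap first test.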
-These are the main outstanding lines I have read, separated by command type:
Errors in code trial 12_v2.docx
Groundtruthing:
Possible sign loss when converting negative image of type int32 to positive image of type uint8.
FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use arr[tuple(seq)] instead of arr[seq]. In the future this will be interpreted as an array index, arr[np.array(seq)], which will result either in an error or a different result.
Creating tiles:
FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use arr[tuple(seq)] instead of arr[seq]. In the future this will be interpreted as an array index, arr[np.array(seq)], which will result either in an error or a different result.
Retraining the model:
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
Evaluation:
n=100
Accuracy was too low: mean accuracy 0.081250 (N=100), mean f-score 0.124214 (N=100), mean prob. 0.704851 (N=100)
n=1000
UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples.
n=10000
ValueError: not enough values to unpack (expected 3, got 0)
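That UndefinedMetricWarning is raised by scikit-learn whenever a label requested in the metric computation has no true samples in the evaluation set, so recall for that class cannot be defined. A small sketch (hypothetical arrays, not the repo's data) showing what triggers it and one way to sidestep it:

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 2])

# Label 2 never occurs in y_true, so asking for it triggers:
# "Recall and F-score are ill-defined and being set to 0.0 in labels
# with no true samples."
p, r, f, s = precision_recall_fscore_support(y_true, y_pred, labels=[0, 1, 2])

# Restricting the metric to labels actually present in y_true avoids it.
present = np.unique(y_true)
p2, r2, f2, s2 = precision_recall_fscore_support(y_true, y_pred, labels=present)
print(r2)   # recall for labels 0 and 1
```

In this context the warning suggests the sampled test tiles at n=1000 did not cover every class, which is also consistent with the ValueError at n=10000 if the sampler ran out of tiles entirely.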