What are precision_er and recall_er? Where can we add a legend for the tables to clarify this? Since all metrics should be defined somewhere (currently they are on Google Drive for CAMI), we might also put the respective document here on GitHub and include the definitions there. Other suggestions?
The comments in Ruben's script indicate that the '*_er' values are the standard error of the mean.
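If that is right, a minimal sketch of the computation, assuming per-genome precision values (the function and numbers below are illustrative, not taken from Ruben's script):

```python
import math

def standard_error(values):
    """Standard error of the mean: sample std dev divided by sqrt(n)."""
    n = len(values)
    mean = sum(values) / n
    # Sample variance with Bessel's correction (n - 1 in the denominator).
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)
    return math.sqrt(variance) / math.sqrt(n)

precisions = [0.92, 0.88, 0.95, 0.81]  # per-genome precision, made-up numbers
print(standard_error(precisions))      # this would be reported as precision_er
```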
Sadly, embedding tables is not possible on GitHub unless I make an image of the table instead, which should be possible, maybe even with a legend!
I will have to look at what R has to offer.
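As a sketch of the table-to-image idea (in Python with pandas/matplotlib rather than R; the file name, column layout, and caption text are assumptions):

```python
# Render a CSV table as a PNG with a caption, so it can be embedded
# in a README as an image.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("prec_recall_combined_all_ranks_by_genome_all_ANI_all.csv")

fig, ax = plt.subplots(figsize=(10, 0.4 * len(df) + 1))
ax.axis("off")  # hide the axes; we only want the table itself
table = ax.table(cellText=df.values, colLabels=df.columns, loc="center")
table.auto_set_font_size(False)
table.set_fontsize(8)
# The figure title doubles as the table legend.
ax.set_title("Precision/recall per genome; *_er = standard error of the mean")
fig.savefig("prec_recall_table.png", dpi=200, bbox_inches="tight")
```

R equivalents (e.g. gridExtra's tableGrob) would do the same job.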
Nature Methods does not accept tables as images.
The online submission system seems to accept LaTeX and Word formats, but elsewhere it asks for Word format only. I will have to reread it all, but Word seems to be the safer choice, much as I like LaTeX.
I noted that in the binning README.md there are links to tables (not displayed, but evident in the raw format), e.g.
https://github.com/CAMI-challenge/firstchallenge_evaluation/blob/master/binning/tables/supervised/prec_recall_combined_all_ranks_by_genome_all_ANI_all.csv
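For reference, the column names in that file can be checked directly, e.g. with pandas (the raw.githubusercontent.com URL below is the usual raw-file form of the blob link above; if the file is tab-separated, pass `sep="\t"`):

```python
# Print the header of the linked table to see which *_er columns it contains.
import pandas as pd

URL = ("https://raw.githubusercontent.com/CAMI-challenge/"
       "firstchallenge_evaluation/master/binning/tables/supervised/"
       "prec_recall_combined_all_ranks_by_genome_all_ANI_all.csv")
df = pd.read_csv(URL)
print(df.columns.tolist())  # expect columns like precision, precision_er, ...
```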