I am using quanteda's textstat_readability() to calculate readability indices on a set of 25 cleaned plain-text files, bundled in a corpus named 'testingCorpus'. This is in preparation for running a larger study corpus of 4,500 documents. The working directory contains the testing files and an .Rmd script; no other files are present.
Code
This is the current code; the texts are in the attached archive:
library(readtext)  # readtext()
library(quanteda)  # corpus(), textstat_readability()
library(tidyr)     # separate()
library(dplyr)     # %>% pipe
library(readr)     # write_csv()

filelist <- list.files(getwd())  # note: this will also pick up the .Rmd in the working directory
extracted_texts <- readtext(filelist)  # produces a 2-column table of doc_id, text
tabled_texts <- extracted_texts %>% separate(text, c("head", "body"), "Title:")
tabled_texts2 <- tabled_texts %>% separate(body, c("body", "tail"), "Literature Cited:")  # removes the header and footer text that is not relevant
testingCorpus <- corpus(tabled_texts2, docid_field = "doc_id", text_field = "body")
readability_reports <- textstat_readability(
  testingCorpus,
  measure = c("Flesch", "Flesch.Kincaid", "Flesch.PSK", "Farr.Jenkins.Paterson",
              "FOG", "FOG.PSK", "FOG.NRI", "SMOG", "Coleman.Liau.grade",
              "Linsear.Write", "ARI", "Bormuth.GP", "RIX",
              "meanSentenceLength", "meanWordSyllables"),
  remove_hyphens = TRUE,
  min_sentence_length = 1, max_sentence_length = 10000,
  intermediate = TRUE
)
write_csv(readability_reports, "readability_summary.csv", na = "NA",
          append = FALSE, col_names = TRUE, quote_escape = "double", eol = "\n")
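One way to check a suspect index is to recompute it directly from the raw counts that `intermediate = TRUE` returns. Below is a minimal sketch for Farr-Jenkins-Paterson using the published formula (−31.517 − 1.015 × average sentence length + 1.599 × monosyllables per 100 words); the function takes raw counts as arguments, since the exact intermediate column names in the quanteda output should be checked with `names(readability_reports)`. The example counts are invented for illustration.

```r
# Sketch: recompute Farr.Jenkins.Paterson by hand from intermediate counts.
# Assumes the published Farr-Jenkins-Paterson formula; pull the real counts
# from the intermediate columns of readability_reports.
fjp_by_hand <- function(n_words, n_sentences, n_monosyllables) {
  asl   <- n_words / n_sentences            # average sentence length
  nm100 <- 100 * n_monosyllables / n_words  # monosyllables per 100 words
  -31.517 - 1.015 * asl + 1.599 * nm100
}

# e.g. 500 words, 25 sentences, 300 one-syllable words (made-up counts):
fjp_by_hand(500, 25, 300)  # ~44.1, on the 0-100 Flesch-like scale
```

If the hand calculation lands on the 0-100 scale while quanteda's column sits around −50 to −40 for the same counts, that localizes the discrepancy to how the monosyllable term enters the package's formula rather than to the tokenization.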
Expected behavior
Flesch, standard FOG, RIX, etc. all produce values in appropriate ranges. However, Farr.Jenkins.Paterson, Bormuth.MC/GP, and FOG.NRI produce values outside acceptable ranges: Farr.Jenkins.Paterson scores are negative, in the −50 to −40 range (they should be on a 0-100 scale like Flesch); Bormuth.GP returns values in the millions; and FOG.NRI ranges from 90 to over 400.
Running the indices one at a time does not change the values, so this does not seem to be an error due to processor load. I exported the intermediate values and recalculated the scores by hand in Excel using the formulas inside quanteda. Bormuth.MC and .GP do not come out quite right, but my other hand-calculated scores match what R outputs.
I ran the same set of texts through a standalone program (Readability Studio), and those scores come out in the ranges I expect. However, when I correlated quanteda's in-range scores (Flesch, ARI, etc.) against Readability Studio's, the correlations varied greatly, from 0.3 to 0.99. I expected them to be a bit off, but not by that much.
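Before reading much into those correlations, it is worth making sure the two tools' scores are aligned by document id rather than by row order; a shuffled row order alone can produce correlations anywhere between 0.3 and 0.99. A minimal sketch, using toy numbers in place of the real Readability Studio export:

```r
# Sketch: align quanteda's scores with the external tool's by document id
# before correlating. 'rs_scores' here is toy data standing in for a
# Readability Studio export; real values would come from its CSV output.
quanteda_scores <- data.frame(document = c("a.txt", "b.txt", "c.txt"),
                              Flesch   = c(45.2, 60.1, 38.7))
rs_scores       <- data.frame(document = c("c.txt", "a.txt", "b.txt"),
                              Flesch   = c(39.0, 44.8, 61.0))

# merge() matches on 'document', so the differing row orders above are safe
merged <- merge(quanteda_scores, rs_scores, by = "document",
                suffixes = c(".quanteda", ".rs"))
cor(merged$Flesch.quanteda, merged$Flesch.rs)  # near 1 when aligned
```

If the correlations stay low even after an id-based merge, the disagreement is in the scores themselves rather than in the bookkeeping.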
System information
TestingSet.zip