
textstat's Readability Index scores are out of range, very different from expected values #39

adanieljohnson opened this issue Jun 16, 2021
I am using quanteda.textstats to calculate readability indices on a set of 25 cleaned plain-text files, bundled in a corpus named 'testingCorpus'. This is in preparation for running a larger study corpus of 4,500 documents. The working directory contains the testing files and an .Rmd script; no other files are present.

Code

This is the current code; the texts are in the attached archive:

library(readtext)
library(tidyverse)  # separate(), %>%, write_csv()
library(quanteda)
library(quanteda.textstats)

filelist <- list.files(getwd(), pattern = "\\.txt$") # only the plain text files, so the .Rmd script is not read in
extracted_texts <- readtext(filelist) # produces a 2-column table of doc_id, text
tabled_texts <- extracted_texts %>% separate(text, c("head", "body"), "Title:")
tabled_texts2 <- tabled_texts %>% separate(body, c("body", "tail"), "Literature Cited:") # removes the header and footer text that is not relevant

testingCorpus <- corpus(tabled_texts2, docid_field = "doc_id", text_field = "body")

readability_reports <- textstat_readability(testingCorpus, measure = c("Flesch", "Flesch.Kincaid", "Flesch.PSK", "Farr.Jenkins.Paterson", "FOG", "FOG.PSK", "FOG.NRI", "SMOG", "Coleman.Liau.grade", "Linsear.Write", "ARI", "Bormuth.GP", "RIX", "meanSentenceLength", "meanWordSyllables"), remove_hyphens = TRUE, min_sentence_length = 1, max_sentence_length = 10000, intermediate = TRUE)

write_csv(readability_reports, "readability_summary.csv", na = "NA", append = FALSE, col_names = TRUE, quote_escape = "double", eol = "\n")
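To see which indices land out of range, a quick check of the result can be run before exporting (a minimal sketch; readability_reports is the data frame produced above, with the document name in the first column):

# min and max of each computed index across the 25 texts;
# drop the document column before applying range()
sapply(readability_reports[, -1], range, na.rm = TRUE)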

Expected behavior

Flesch, standard FOG, RIX, etc. all produce values in appropriate ranges. However, Farr.Jenkins.Paterson, Bormuth.MC/GP, and FOG.NRI produce values outside acceptable ranges: Farr.Jenkins.Paterson scores are negative, in the -50 to -40 range (they should be on a 0-100 scale like Flesch); Bormuth.GP returns values in the millions; and FOG.NRI ranges from 90 to over 400.

Running the indices one at a time does not change the values, so it does not seem to be an error due to processor load. I downloaded the intermediate values and calculated the scores by hand in Excel using the formulas inside quanteda. Bormuth.MC and .GP do not come out quite right, but the other scores I calculate match what R outputs.
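The same hand check can be done in R from the intermediate counts. A sketch using the published Farr-Jenkins-Paterson formula (a simplification of Flesch Reading Ease), assuming the intermediate columns are named W (words), St (sentences), and W_1Sy (one-syllable words) as returned with intermediate = TRUE:

# Farr-Jenkins-Paterson (1951): -31.517 - 1.015 * ASL + 1.599 * N1,
# where ASL = words per sentence and N1 = one-syllable words per 100 words
fjp_check <- with(readability_reports,
                  -31.517 - 1.015 * (W / St) + 1.599 * (100 * W_1Sy / W))
range(fjp_check)

Purely as a plausibility check, not a confirmed cause: if the monosyllable term entered as a proportion (0 to 1) instead of per 100 words, a text averaging about 20 words per sentence would score near -31.5 - 20.3 + 1.1, i.e. roughly -50, which matches the -50 to -40 values reported above.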

I ran the same set of texts through a standalone program (Readability Studio), and those scores come out in the ranges I expect. However, when I correlated the quanteda scores that were in range (Flesch, ARI, etc.) against the Readability Studio scores, the correlations varied greatly, from 0.3 to 0.99. I expect them to be a bit off, but not by that much.
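For reference, the cross-program comparison was done by matching documents and correlating index by index. A sketch, where readability_studio_scores is a hypothetical data frame exported from Readability Studio with the same document names and index column names:

# join the two score tables on document name, then correlate
# each shared index across the 25 texts
merged <- merge(readability_reports, readability_studio_scores,
                by = "document", suffixes = c(".qt", ".rs"))
cor(merged$Flesch.qt, merged$Flesch.rs)  # e.g., Flesch
cor(merged$ARI.qt, merged$ARI.rs)        # e.g., ARI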

System information

R version 4.1.0 (2021-05-18)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Big Sur 10.16

Matrix products: default
LAPACK: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] nsyllable_1.0             data.table_1.14.0        
 [3] quanteda.textstats_0.94.1 readtext_0.80            
 [5] forcats_0.5.1             stringr_1.4.0            
 [7] dplyr_1.0.6               purrr_0.3.4              
 [9] readr_1.4.0               tidyr_1.1.3              
[11] tibble_3.1.2              ggplot2_3.3.3            
[13] tidyverse_1.3.1           magrittr_2.0.1           
[15] quanteda_3.0.0           

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.6         lubridate_1.7.10   lattice_0.20-44    assertthat_0.2.1  
 [5] digest_0.6.27      utf8_1.2.1         R6_2.5.0           cellranger_1.1.0  
 [9] backports_1.2.1    reprex_2.0.0       evaluate_0.14      httr_1.4.2        
[13] pillar_1.6.1       rlang_0.4.11       readxl_1.3.1       rstudioapi_0.13   
[17] Matrix_1.3-3       rmarkdown_2.8      munsell_0.5.0      tinytex_0.32      
[21] broom_0.7.6        compiler_4.1.0     modelr_0.1.8       xfun_0.23         
[25] pkgconfig_2.0.3    htmltools_0.5.1.1  tidyselect_1.1.1   fansi_0.5.0       
[29] crayon_1.4.1       dbplyr_2.1.1       withr_2.4.2        grid_4.1.0        
[33] jsonlite_1.7.2     gtable_0.3.0       lifecycle_1.0.0    DBI_1.1.1         
[37] scales_1.1.1       RcppParallel_5.1.4 cli_2.5.0          stringi_1.6.2     
[41] fs_1.5.0           xml2_1.3.2         ellipsis_0.3.2     stopwords_2.2     
[45] generics_0.1.0     vctrs_0.3.8        fastmatch_1.1-0    tools_4.1.0       
[49] glue_1.4.2         proxyC_0.2.0       hms_1.1.0          yaml_2.2.1        
[53] colorspace_2.0-1   rvest_1.0.0        knitr_1.33         haven_2.4.1     

TestingSet.zip

kbenoit added a commit that referenced this issue Nov 15, 2021