For this package, all the CSV files are relatively small, I think.
CSV.jl is a heavy dependency at startup; it only pays off when loading large CSV files. I was thinking you could instead use the really fast DelimitedFiles standard library (https://docs.julialang.org/en/v1/stdlib/DelimitedFiles/), which, it seems, works even with compressed files:
julia> @time using DelimitedFiles
0.004993 seconds (1.15 k allocations: 82.234 KiB, 78.42% compilation time)
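
For a small file, a minimal sketch of what the swap could look like (the path and the single header row are assumptions for illustration, not taken from this package):

using DelimitedFiles

# Hypothetical path; with header=true, readdlm returns the data matrix plus
# the header row as a 1×n matrix of column names.
data, header = readdlm("data/example.csv", ',', header=true)

And since readdlm also accepts any IO source, a decompressor stream (e.g. CodecZlib.jl's GzipDecompressorStream, an extra but lightweight dependency) could be wrapped around the file handle for compressed input.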
A downside could be: if in all real-world code you would load CSV anyway, then this isn't effective (though it's no slower this way either). Another option: this package is often (though not always) used with RCall (which is what I'm looking into now), and in that case you could plausibly use R's CSV reading rather than Julia's CSV, or Python's tools where appropriate.
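
To illustrate the RCall route, a rough sketch (hypothetical path; read.csv runs in the embedded R session, and rcopy should convert the returned data.frame into a Julia DataFrame):

using RCall

# R does the CSV parsing here, so CSV.jl never has to be loaded on the Julia side.
df = rcopy(R"read.csv('data/example.csv')")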