When submitting a large number of jobs, BatchJobs still fails for me (this is somewhat similar to #58, but the number of jobs is almost 50 times higher).
I submit between 275,000 and 500,000 jobs in 1, 2, 10, and 25 chunks.
Versions: BatchJobs_1.7, BBmisc_1.9

Submitting the jobs in a single chunk always works, as does submitting 2 chunks; 10 chunks works only sometimes, and 25 chunks never works.
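For reference, a minimal sketch of the submission pattern (the registry id, the toy function, and the exact job count are placeholders; the real workload maps my own function over a few hundred thousand parameter values):

```r
library(BatchJobs)

# Placeholder registry and job function, only to illustrate the call pattern.
reg <- makeRegistry(id = "bigsubmit")
batchMap(reg, function(i) sqrt(i), seq_len(500000))

# Split the job ids into chunks (1, 2, 10 or 25 in my tests) and submit
# the whole chunked list at once.
chunked <- chunk(getJobIds(reg), n.chunks = 25, shuffle = TRUE)
submitJobs(reg, ids = chunked)
```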
If `staged.queries = TRUE` (otherwise the same behaviour as in #58), independent of `db.options = list(pragmas = c("busy_timeout=5000", "journal_mode=WAL"))` and `fs.timeout` (see the configuration sketch after this list):

- in the `submitJobs()` call, the function itself runs fine until `return(invisible(ids))`
- message "Might take some time, do not interrupt this!"
- after this, all jobs are killed/crash/disappear
- if I call `waitForJobs()`, R segfaults
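Continuing the sketch above, the options were set roughly like this (an assumed placement; equivalent settings could live in a `.BatchJobs.R` configuration file, and the `fs.timeout` value is only a placeholder):

```r
# Assumed configuration; the fs.timeout value below is a placeholder, the
# report does not state which value was actually used.
setConfig(
  staged.queries = TRUE,
  db.options = list(pragmas = c("busy_timeout=5000", "journal_mode=WAL")),
  fs.timeout = 120
)

submitJobs(reg, ids = chunked)  # finishes up to return(invisible(ids))
waitForJobs(reg)                # then R segfaults as shown below
```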
```
Might take some time, do not interrupt this!
Syncing registry ...
Waiting [S:550000 D:0 E:0 R:0] |+ | 0% (00:00:00)
Status for 550000 jobs at 2015-08-01 18:00:21
Submitted: 550000 (100.00%)
Started: 550000 (100.00%)
Running: 0 ( 0.00%)
Done: 0 ( 0.00%)
Errors: 0 ( 0.00%)
Expired: 550000 (100.00%)
Time: min=NAs avg=NAs max=NAs

       n submitted started done error running expired t_min t_avg t_max
1 550000    550000  550000    0     0       0  550000    NA    NA    NA

*** buffer overflow detected ***: /usr/lib/R/bin/exec/R terminated
======= Backtrace: =========
/lib64/libc.so.6(__fortify_fail+0x37)[0x3ec4302527]
/lib64/libc.so.6[0x3ec4300410]
/lib64/libc.so.6[0x3ec42ff2c7]
/usr/lib/R/lib/libR.so(+0xfc01b)[0x2b722845901b]
/usr/lib/R/lib/libR.so(+0xfff4e)[0x2b722845cf4e]
/usr/lib/R/lib/libR.so(+0xfffbf)[0x2b722845cfbf]
```