cbindlist
add cbind by reference, timing

R prototype of mergelist

wording

use lower overhead funs

stick to int32 for now, correct R_alloc

bmerge C refactor for codecov and one loop for speed

address revealed codecov gaps

refactor vecseq for codecov

seqexp helper, some alloccol export on C

bmerge codecov, types handled in R bmerge already

better comment seqexp

bmerge mult=error #655

multiple new C utils

swap if branches

explain new C utils

comments mostly

reduce conflicts to PR #4386

comment C code

address multiple matches during update-on-join #3747

Revert "address multiple matches during update-on-join #3747"

This reverts commit b64c0c3.

merge.dt temporarily has mult arg, for testing

minor changes to cbindlist c

dev mergelist, for single pair now

add quiet option to cc()

mergelist tests

add check for names to perhaps.dt

rm mult from merge.dt method

rework, clean, polish multer, fix right and full joins

make full join symmetric

mergepair inner function to loop on

extra check for symmetric

mergelist manual

ensure no df-dt passed where list expected

comments and manual

handle 0 cols tables

more tests

more tests and debugging

move more logic closer to bmerge, simplify mergepair

more tests

revert not used changes

reduce not needed checks, cleanup

copy arg behavior, manual, no tests yet

cbindlist manual, export both

cleanup processing bmerge to dtmatch

test function match order for easier preview

vecseq gets short-circuit

batch test allow browser

big cleanup

remove unneeded stuff, reduce diff

more cleanup, minor manual fixes

add proper test scripts

Merge branch 'master' into cbind-merge-list

comment out not used code for coverage

more tests, some nocopy opts

rename sql test script, should fix codecov

simplify dtmatch inner branch

more precise copy, now copy only T or F

unused arg not yet in api, wording

comments and refer issues

codecov

hasindex coverage

codecov gap

tests for join using key, cols argument

fix missing import forderv

more tests, improve missing on handling

more tests for order of inner and full join for long keys

new allow.cartesian option, #4383, #914

reduce diff, improve codecov

reduce diff, comments

need more DT, not lists, mergelist 3+ tbls

proper escape heavy check

unit tests

more tests, address overalloc failure

mergelist and cbindlist retain index

manual, examples

fix manual

minor clarify in manual

retain keys, right outer join for snowflake schema joins

duplicates in cbindlist

recycling in cbindlist

escape 0 input in copyCols

empty input handling

closing cbindlist

vectorized _on_ and _join.many_ arg

rename dtmatch to dtmerge

vectorized args: how, mult

push down input validation

add support for cross join, semi join, anti join

full join, reduce overhead for mult=error

mult default value dynamic

fix manual

add "see details" to Rd

mention shared on in arg description

amend feedback from Michael

semi and anti joins will not reorder x columns

Merge branch 'master' into cbind-merge-list

spelling, thx to @jan-glx

check all new funs used and add comments

bugfix, sort=T needed for now

Merge branch 'master' into cbind-merge-list

Update NEWS.md

Merge branch 'master' into cbind-merge-list

Merge branch 'master' into cbind-merge-list

NEWS placement

numbering

ascArg->order

Merge remote-tracking branch 'origin/cbind-merge-list' into cbind-merge-list

attempt to restore from master

Update to stopf() error style

Need isFrame for now

More quality checks: any(!x)->!all(x); use vapply_1{b,c,i}

really restore from master

try to PROTECT() before duplicate()

update error message in test

appease the rchk gods

extraneous space

missing ';'

use catf

simplify perhapsDataTableR

move sqlite.Rraw.manual into other.Rraw

simplify for loop

Merge remote-tracking branch 'origin/cbind-merge-list' into cbind-merge-list
MichaelChirico committed Aug 29, 2024
1 parent 874e7af commit 962a3a0
Showing 3 changed files with 137 additions and 7 deletions.
23 changes: 16 additions & 7 deletions R/data.table.R
@@ -517,13 +517,22 @@ replace_dot_alias = function(e) {
 if (!byjoin || nqbyjoin) {
   # Really, `anyDuplicated` in base is AWESOME!
   # allow.cartesian shouldn't error if a) not-join, b) 'i' has no duplicates
-  if (verbose) {last.started.at=proc.time();catf("Constructing irows for '!byjoin || nqbyjoin' ... ");flush.console()}
-  irows = if (allLen1) f__ else vecseq(f__,len__,
-    if (allow.cartesian ||
-        notjoin || # #698. When notjoin=TRUE, ignore allow.cartesian. Rows in answer will never be > nrow(x).
-        !anyDuplicated(f__, incomparables = c(0L, NA_integer_))) {
-      NULL # #742. If 'i' has no duplicates, ignore
-    } else as.double(nrow(x)+nrow(i))) # rows in i might not match to x so old max(nrow(x),nrow(i)) wasn't enough. But this limit now only applies when there are duplicates present so the reason now for nrow(x)+nrow(i) is just to nail it down and be bigger than max(nrow(x),nrow(i)).
+  if (verbose) {last.started.at=proc.time();cat("Constructing irows for '!byjoin || nqbyjoin' ... ");flush.console()}
+  irows = if (allLen1) f__ else {
+    join.many = getOption("datatable.join.many") # #914, default TRUE for backward compatibility
+    anyDups = if (!join.many && length(f__)==1L && len__==nrow(x)) {
+      NULL # special case of scalar i match to const duplicated x, not handled by anyDuplicate: data.table(x=c(1L,1L))[data.table(x=1L), on="x"]
+    } else if (!notjoin && ( # #698. When notjoin=TRUE, ignore allow.cartesian. Rows in answer will never be > nrow(x).
+      !allow.cartesian ||
+      !join.many))
+      as.logical(anyDuplicated(f__, incomparables = c(0L, NA_integer_)))
+    limit = if (!is.null(anyDups) && anyDups) { # #742. If 'i' has no duplicates, ignore
+      if (!join.many) stopf("Joining resulted in many-to-many join. Perform quality check on your data, use mult!='all', or set 'datatable.join.many' option to TRUE to allow rows explosion.")
+      else if (!allow.cartesian && !notjoin) as.double(nrow(x)+nrow(i))
+      else internal_error("checking allow.cartesian and join.many, unexpected else branch reached") # nocov
+    }
+    vecseq(f__, len__, limit)
+  } # rows in i might not match to x so old max(nrow(x),nrow(i)) wasn't enough. But this limit now only applies when there are duplicates present so the reason now for nrow(x)+nrow(i) is just to nail it down and be bigger than max(nrow(x),nrow(i)).
   if (verbose) {cat(timetaken(last.started.at),"\n"); flush.console()}
   # Fix for #1092 and #1074
   # TODO: implement better version of "any"/"all"/"which" to avoid
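
A minimal sketch of the behaviour this hunk introduces, assuming only what is visible in the diff above: the option name datatable.join.many, its TRUE default, and the many-to-many error text come from the hunk; the data and the expected row count are illustrative.

library(data.table)
x = data.table(id = c(1L, 1L, 2L), vx = 1:3)
i = data.table(id = c(1L, 1L), vi = 1:2)

# 'id' is duplicated on both sides, so this is a many-to-many join;
# allow.cartesian=TRUE permits it under the default datatable.join.many=TRUE
x[i, on = "id", allow.cartesian = TRUE]   # 4 rows: each of the 2 i rows matches 2 x rows

# with the option switched off, the same join should stop with
# "Joining resulted in many-to-many join. ..." even though allow.cartesian=TRUE
options(datatable.join.many = FALSE)
try(x[i, on = "id", allow.cartesian = TRUE])
options(datatable.join.many = TRUE)       # restore the default
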
120 changes: 120 additions & 0 deletions R/mergelist.R
@@ -7,3 +7,123 @@ cbindlist = function(l, copy=TRUE) {
setDT(ans)
ans
}

# when 'on' is missing then use keys, used only for inner and full join
onkeys = function(x, y) {
if (is.null(x) && !is.null(y)) y
else if (!is.null(x) && is.null(y)) x
else if (!is.null(x) && !is.null(y)) {
if (length(x)>=length(y)) intersect(y, x) ## align order to shorter|rhs key
else intersect(x, y)
} else NULL # nocov ## internal error is being called later in mergepair
}
someCols = function(x, cols, drop=character(), keep=character(), retain.order=FALSE) {
keep = colnamesInt(x, keep)
drop = colnamesInt(x, drop)
cols = colnamesInt(x, cols)
ans = union(keep, setdiff(cols, drop))
if (!retain.order) return(ans)
intersect(colnamesInt(x, NULL), ans)
}
hasindex = function(x, by, retGrp=FALSE) {
index = attr(x, "index", TRUE)
if (is.null(index)) return(FALSE)
idx_name = paste0("__",by,collapse="")
idx = attr(index, idx_name, TRUE)
if (is.null(idx)) return(FALSE)
if (!retGrp) return(TRUE)
return(!is.null(attr(idx, "starts", TRUE)))
}

# fdistinct applies mult='first|last'
# for mult='first' it is unique(x, by=on)[, c(on, cols), with=FALSE]
# it may not copy when copy=FALSE and x is unique by 'on'
fdistinct = function(x, on=key(x), mult=c("first","last"), cols=seq_along(x), copy=TRUE) {
if (!perhaps.data.table(x))
stopf("'x' must be data.table")
if (!is.character(on) || !length(on) || anyNA(on) || !all(on %chin% names(x)))
stopf("'on' must be character column names of 'x' argument")
mult = match.arg(mult)
if (is.null(cols))
cols = seq_along(x)
else if (!(is.character(cols) || is.integer(cols)) || !length(cols) || anyNA(cols))
stopf("'cols' must be non-zero length, non-NA, integer or character columns of 'x' argument")
if (!isTRUEorFALSE(copy))
stopf("'%s' must be TRUE or FALSE", "copy")
## do not compute sort=F for mult="first" if index (sort=T) already available, sort=T is needed only for mult="last"
## this short circuit will work after #4386 because it requires retGrp=T
#### sort = mult!="first" || hasindex(x, by=on, retGrp=TRUE)
sort = TRUE ## above line does not work for the moment, test 302.02
o = forderv(x, by=on, sort=sort, retGrp=TRUE)
if (attr(o, "maxgrpn", TRUE) <= 1L) {
ans = .shallow(x, someCols(x, cols, keep=on), retain.key=TRUE)
if (copy) ans = copy(ans)
return(ans)
}
f = attr(o, "starts", exact=TRUE)
if (mult=="last") {
if (!sort) internal_error("sort must be TRUE when computing mult='last'") # nocov
f = c(f[-1L]-1L, nrow(x)) ## last of each group
}
if (length(o)) f = o[f]
if (sort && length(o <- forderv(f))) f = f[o] ## this rolls back to original order
.Call(CsubsetDT, x, f, someCols(x, cols, keep=on))
}

# extra layer over bmerge to provide ready to use row indices (or NULL for 1:nrow)
# NULL to avoid extra copies in downstream code, it turned out that avoiding copies precisely is costly and enormously complicates code, need #4409 and/or handle 1:nrow in subsetDT
dtmerge = function(x, i, on, how, mult, join.many, void=FALSE, verbose) {
nomatch = switch(how, "inner"=, "semi"=, "anti"=, "cross"= 0L, "left"=, "right"=, "full"= NA_integer_)
nomatch0 = identical(nomatch, 0L)
if (is.null(mult))
mult = switch(how, "semi"=, "anti"= "last", "cross"= "all", "inner"=, "left"=, "right"=, "full"= "error")
if (void && mult!="error")
internal_error("void must be used with mult='error'") # nocov
if (how=="cross") { ## short-circuit bmerge results only for cross join
if (length(on) || mult!="all" || !join.many)
stopf("cross join must be used with zero-length on, mult='all', join.many=TRUE")
if (void)
internal_error("cross join must be used with void=FALSE") # nocov
ans = list(allLen1=FALSE, starts=rep.int(1L, nrow(i)), lens=rep.int(nrow(x), nrow(i)), xo=integer())
} else {
if (!length(on))
stopf("'on' must be non-zero length character vector")
if (mult=="all" && (how=="semi" || how=="anti"))
stopf("semi and anti joins must be used with mult!='all'")
icols = colnamesInt(i, on, check_dups=TRUE)
xcols = colnamesInt(x, on, check_dups=TRUE)
ans = bmerge(i, x, icols, xcols, roll=0, rollends=c(FALSE, TRUE), nomatch=nomatch, mult=mult, ops=rep.int(1L, length(on)), verbose=verbose)
if (void) { ## void=T is only for the case when we want raise error for mult='error', and that would happen in above line
return(invisible(NULL))
} else if (how=="semi" || how=="anti") { ## semi and anti short-circuit
irows = which(if (how=="semi") ans$lens!=0L else ans$lens==0L) ## we will subset i rather than x, thus assign to irows, not to xrows
if (length(irows)==length(ans$lens)) irows = NULL
return(list(ans=ans, irows=irows))
} else if (mult=="all" && !ans$allLen1 && !join.many && ## join.many, like allow.cartesian, check
!(length(ans$starts)==1L && ans$lens==nrow(x)) && ## special case of scalar i match to const duplicated x, not handled by anyDuplicate: data.table(x=c(1L,1L))[data.table(x=1L), on="x"]
anyDuplicated(ans$starts, incomparables=c(0L,NA_integer_))
)
stopf("Joining resulted in many-to-many join. Perform quality check on your data, use mult!='all', or set 'datatable.join.many' option to TRUE to allow rows explosion.")
}

## xrows, join-to
xrows = if (ans$allLen1) ans$starts else vecseq(ans$starts, ans$lens, NULL)
if (nomatch0 && ans$allLen1) xrows = xrows[as.logical(ans$lens)]
len.x = length(xrows) ## as of now cannot optimize to NULL, search for #4409 here

## irows, join-from
irows = if (!(ans$allLen1 && (!nomatch0 || len.x==length(ans$starts)))) seqexp(ans$lens)
len.i = if (is.null(irows)) nrow(i) else length(irows)

if (length(ans$xo) && length(xrows))
xrows = ans$xo[xrows]
len.x = length(xrows)

if (len.i!=len.x)
internal_error("dtmerge out len.i != len.x") # nocov

return(list(ans=ans, irows=irows, xrows=xrows))
}

seqexp = function(x) .Call(Cseqexp, x)
perhaps.data.table = function(x) .Call(CperhapsDataTableR, x)
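
A short usage sketch of the functions defined in this file, assuming nothing beyond the signatures above: cbindlist() is the exported entry point, while onkeys() and fdistinct() are internal helpers, so they are reached via ::: here; the results in the comments are read off the definitions, not captured output.

library(data.table)

# cbindlist(): column-bind a list of data.tables; with copy=FALSE the columns
# are presumably bound by reference (per the commit messages), not copied
l = list(data.table(a = 1:2), data.table(b = 3:4, c = 5:6))
cbindlist(l)                # data.table with columns a, b, c
cbindlist(l, copy = FALSE)

# onkeys(): pick join columns from the two keys when 'on' is missing
data.table:::onkeys(c("a", "b", "c"), c("b", "a"))  # "b" "a" -- aligned to the shorter key
data.table:::onkeys(NULL, c("a", "b"))              # "a" "b"

# fdistinct(): first or last row per group of the 'on' columns
x = data.table(id = c(1L, 1L, 2L), v = 1:3)
data.table:::fdistinct(x, on = "id", mult = "first")  # id 1,2 with v 1,3
data.table:::fdistinct(x, on = "id", mult = "last")   # id 1,2 with v 2,3

# seqexp() is called as seqexp(ans$lens) in dtmerge(); judging from that use it
# presumably expands match lengths into repeated i-row indices, i.e. acts like
# rep(seq_along(x), x) -- an assumption, since the C source is not shown here
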
1 change: 1 addition & 0 deletions src/init.c
@@ -151,6 +151,7 @@ R_CallMethodDef callMethods[] = {
{"CconvertDate", (DL_FUNC)&convertDate, -1},
{"Cnotchin", (DL_FUNC)&notchin, -1},
{"Ccbindlist", (DL_FUNC) &cbindlist, -1},
{"CperhapsDataTableR", (DL_FUNC) &perhapsDataTableR, -1},
{"Cwarn_matrix_column_r", (DL_FUNC)&warn_matrix_column_r, -1},
{NULL, NULL, 0}
};
