`docs/source/user-guide/configs.md` (+3 −2)
@@ -25,7 +25,7 @@ Comet provides the following configuration settings.
|--------|-------------|---------------|
| spark.comet.batchSize | The columnar batch size, i.e., the maximum number of rows that a batch can contain. | 8192 |
| spark.comet.caseConversion.enabled | Java uses locale-specific rules when converting strings to upper or lower case and Rust does not, so we disable upper and lower by default. | false |
-| spark.comet.cast.allowIncompatible | Comet is not currently fully compatible with Spark for all cast operations. Set this config to true to allow them anyway. See compatibility guide for more information. | false |
+| spark.comet.cast.allowIncompatible | Comet is not currently fully compatible with Spark for all cast operations. Set this config to true to allow them anyway. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
| spark.comet.columnar.shuffle.async.enabled | Whether to enable asynchronous shuffle for Arrow-based shuffle. | false |
| spark.comet.columnar.shuffle.async.max.thread.num | Maximum number of threads on an executor used for Comet async columnar shuffle. This is the upper bound of the total number of shuffle threads per executor. In other words, if the number of cores multiplied by the number of shuffle threads per task (`spark.comet.columnar.shuffle.async.thread.num`) is larger than this config, Comet will use this config as the number of shuffle threads per executor instead. | 100 |
| spark.comet.columnar.shuffle.async.thread.num | Number of threads used for Comet async columnar shuffle per shuffle task. Note that more threads means more memory requirement to buffer shuffle data before flushing to disk. Also, more threads may not always improve performance, and should be set based on the number of cores available. | 3 |
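The interaction between the per-task and per-executor thread limits above can be sketched as follows. This is not code from Comet, just a hypothetical Python illustration of the described cap; the function name and `cores` parameter are invented for the example:

```python
def effective_shuffle_threads(cores: int,
                              threads_per_task: int = 3,
                              max_threads_per_executor: int = 100) -> int:
    # threads_per_task models spark.comet.columnar.shuffle.async.thread.num;
    # max_threads_per_executor models
    # spark.comet.columnar.shuffle.async.max.thread.num.
    # If cores * threads-per-task would exceed the per-executor maximum,
    # the maximum wins; otherwise the product is used.
    return min(cores * threads_per_task, max_threads_per_executor)

# 8 cores * 3 threads/task = 24, below the cap of 100
assert effective_shuffle_threads(8) == 24
# 48 cores * 3 = 144 exceeds the cap, so 100 is used
assert effective_shuffle_threads(48) == 100
```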
@@ -64,6 +64,7 @@ Comet provides the following configuration settings.
| spark.comet.explain.native.enabled | When this setting is enabled, Comet will provide a tree representation of the native query plan before execution and again after execution, with metrics. | false |
| spark.comet.explain.verbose.enabled | When this setting is enabled, Comet will provide a verbose tree representation of the extended information. | false |
| spark.comet.explainFallback.enabled | When this setting is enabled, Comet will provide logging explaining the reason(s) why a query stage cannot be executed natively. Set this to false to reduce the amount of logging. | false |
+| spark.comet.expression.allowIncompatible | Comet is not currently fully compatible with Spark for all expressions. Set this config to true to allow them anyway. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
| spark.comet.memory.overhead.factor | Fraction of executor memory to be allocated as additional non-heap memory per executor process for Comet. | 0.2 |
| spark.comet.memory.overhead.min | Minimum amount of additional memory to be allocated per executor process for Comet, in MiB. | 402653184b |
| spark.comet.nativeLoadRequired | Whether to require the Comet native library to load successfully when Comet is enabled. If false, Comet will silently fall back to Spark when it fails to load the native lib. Otherwise, an error will be thrown and the Spark job will be aborted. | false |
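The two memory-overhead settings above combine roughly as "a fraction of executor memory, floored at a minimum". This is a hypothetical Python sketch of that relationship, not Comet's actual allocation code; it assumes the minimum is expressed in bytes (402653184 bytes = 384 MiB):

```python
def comet_memory_overhead(executor_memory_bytes: int,
                          factor: float = 0.2,
                          min_overhead_bytes: int = 402653184) -> int:
    # factor models spark.comet.memory.overhead.factor;
    # min_overhead_bytes models spark.comet.memory.overhead.min.
    # The overhead is the larger of the fractional share and the floor.
    return max(int(executor_memory_bytes * factor), min_overhead_bytes)

GiB = 1024 ** 3
# 4 GiB executor: 20% is ~819 MiB, above the 384 MiB floor
assert comet_memory_overhead(4 * GiB) == 858993459
# 1 GiB executor: 20% is ~205 MiB, so the 384 MiB minimum applies
assert comet_memory_overhead(1 * GiB) == 402653184
```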
@@ -73,7 +74,7 @@ Comet provides the following configuration settings.
| spark.comet.parquet.read.io.mergeRanges.delta | The delta in bytes between consecutive read ranges below which the parallel reader will try to merge the ranges. The default is 8MB. | 8388608 |
| spark.comet.parquet.read.parallel.io.enabled | Whether to enable Comet's parallel reader for Parquet files. The parallel reader reads ranges of consecutive data in a file in parallel. It is faster for large files and row groups but uses more resources. | true |
| spark.comet.parquet.read.parallel.io.thread-pool.size | The maximum number of parallel threads the parallel reader will use in a single executor. For executors configured with a smaller number of cores, use a smaller number. | 16 |
-| spark.comet.regexp.allowIncompatible | Comet is not currently fully compatible with Spark for all regular expressions. Set this config to true to allow them anyway using Rust's regular expression engine. See compatibility guide for more information. | false |
+| spark.comet.regexp.allowIncompatible | Comet is not currently fully compatible with Spark for all regular expressions. Set this config to true to allow them anyway. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
| spark.comet.scan.enabled | Whether to enable native scans. When this is turned on, Spark will use Comet to read supported data sources (currently only Parquet is supported natively). Note that to enable native vectorized execution, both this config and 'spark.comet.exec.enabled' need to be enabled. | true |
| spark.comet.scan.preFetch.enabled | Whether to enable pre-fetching feature of CometScan. | false |
| spark.comet.scan.preFetch.threadNum | The number of threads running pre-fetching for CometScan. Effective if spark.comet.scan.preFetch.enabled is enabled. Note that more pre-fetching threads means more memory requirement to store pre-fetched row groups. | 2 |
`docs/templates/compatibility-template.md` (+16 −6)
@@ -32,12 +32,6 @@ be used in production.
There is an [epic](https://github.com/apache/datafusion-comet/issues/313) where we are tracking the work to fully implement ANSI support.
-## Regular Expressions
-
-Comet uses the Rust regexp crate for evaluating regular expressions, and this has different behavior from Java's
-regular expression engine. Comet will fall back to Spark for patterns that are known to produce different results, but
-this can be overridden by setting `spark.comet.regexp.allowIncompatible=true`.
-
## Floating number comparison
Spark normalizes NaN and zero for floating point numbers for several cases. See `NormalizeFloatingNumbers` optimization rule in Spark.
@@ -46,6 +40,22 @@ because they are handled well in Spark (e.g., `SQLOrderingUtil.compareFloats`).
functions of arrow-rs used by DataFusion do not normalize NaN and zero (e.g., [arrow::compute::kernels::cmp::eq](https://docs.rs/arrow/latest/arrow/compute/kernels/cmp/fn.eq.html#)).
So Comet will add an additional normalization expression for NaN and zero when performing such comparisons.
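The need for normalization can be illustrated with a small Python sketch. This is not Comet or Spark code; it merely mimics, under stated assumptions, what a rule like Spark's `NormalizeFloatingNumbers` does before a bitwise-style comparison (the `normalize` and `bits` helpers are invented for the example):

```python
import math
import struct

def normalize(x: float) -> float:
    # Map every NaN to one canonical NaN and -0.0 to +0.0 before
    # comparison, in the spirit of Spark's NormalizeFloatingNumbers rule.
    if math.isnan(x):
        return float("nan")
    if x == 0.0:
        return 0.0
    return x

def bits(x: float) -> int:
    # Raw IEEE 754 bit pattern, which is what a bitwise-equality
    # kernel effectively compares.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# -0.0 and +0.0 differ at the bit level...
assert bits(-0.0) != bits(0.0)
# ...but compare equal once normalized, matching Spark's semantics.
assert bits(normalize(-0.0)) == bits(normalize(0.0))
```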
+## Incompatible Expressions
+
+Some Comet native expressions are not 100% compatible with Spark and are disabled by default. These expressions
+will fall back to Spark but can be enabled by setting `spark.comet.expression.allowIncompatible=true`.
+
+## Array Expressions
+
+Comet has experimental support for a number of array expressions. These are experimental and currently marked
+as incompatible and can be enabled by setting `spark.comet.expression.allowIncompatible=true`.
+
+## Regular Expressions
+
+Comet uses the Rust regexp crate for evaluating regular expressions, and this has different behavior from Java's
+regular expression engine. Comet will fall back to Spark for patterns that are known to produce different results, but
+this can be overridden by setting `spark.comet.regexp.allowIncompatible=true`.
+
## Cast
Cast operations in Comet fall into three levels of support: