`docs/docs/spark-procedures.md` (6 additions, 7 deletions)
@@ -533,7 +533,6 @@ Dangling deletes are always filtered out during rewriting.
|`min-input-files`| 5 | Any file group exceeding this number of files will be rewritten regardless of other criteria |
|`rewrite-all`| false | Force rewriting of all provided files, overriding other options |
|`max-file-group-size-bytes`| 107374182400 (100GB) | The largest amount of data that should be rewritten in a single file group. The entire rewrite operation is broken down into pieces based on partitioning and, within partitions, based on size into file groups. This helps break down the rewriting of very large partitions which may otherwise not be rewritable due to the resource constraints of the cluster. |
-|`max-files-to-rewrite`| null | Sets an upper limit on the number of files eligible for the rewrite operation. This can be useful for improving job stability, particularly when dealing with a large number of files. If this option is not specified, all files are considered for rewriting |

#### Output
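For context beyond the diff itself: these options are passed to Iceberg's `rewrite_data_files` Spark procedure through its `options` map of string pairs. A minimal sketch, assuming a catalog named `catalog_name` and an illustrative table `db.sample`:

```sql
-- Compact data files, bounding how many files a single run may rewrite.
-- Catalog name, table name, and option values here are illustrative.
CALL catalog_name.system.rewrite_data_files(
  table => 'db.sample',
  options => map(
    'min-input-files', '2',          -- groups with 2+ files are rewritten regardless of other criteria
    'max-files-to-rewrite', '1000'   -- consider at most 1000 files in this run
  )
);
```

Note that the values are passed as strings because `options` is a string-to-string map.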
@@ -867,11 +866,11 @@ that provide additional information about the changes being tracked. These columns
Here is an example of corresponding results. It shows that the first snapshot inserted 2 records, and the second snapshot deleted 1 record.
With the net changes, the above changelog view only contains the following row, since Alice was inserted in the first snapshot and deleted in the second snapshot.
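For context beyond the diff itself: the net-changes behavior described above comes from Iceberg's `create_changelog_view` procedure with `net_changes => true`. A minimal sketch with an illustrative table name (the resulting view is named `<table>_changes` by default):

```sql
-- Create a changelog view that reports only net changes: a row inserted in
-- one snapshot and deleted in a later one (like Alice) is filtered out.
CALL spark_catalog.system.create_changelog_view(
  table => 'db.tbl',
  net_changes => true
);

-- Query the generated view.
SELECT * FROM tbl_changes;
```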