diff --git a/source/source-connector/configuration-properties/copy-existing.txt b/source/source-connector/configuration-properties/copy-existing.txt
deleted file mode 100644
index fe82c42..0000000
--- a/source/source-connector/configuration-properties/copy-existing.txt
+++ /dev/null
@@ -1,137 +0,0 @@
-.. _source-configuration-copy-existing:
-
-========================
-Copy Existing Properties
-========================
-
-.. contents:: On this page
-   :local:
-   :backlinks: none
-   :depth: 2
-   :class: singlecol
-
-Overview
---------
-
-.. _source-configuration-copy-existing-description-start:
-
-.. important:: ``copy.existing*`` Properties are Deprecated
-
-   Starting in Version 1.9 of the {+connector+}, ``copy.existing*`` properties
-   are deprecated and may be removed in a future release. You should use
-   ``startup.mode*`` properties to configure the copy existing feature.
-   To learn about ``startup.mode*`` settings, see
-   :ref:`source-configuration-startup`.
-
-Use the following configuration settings to enable the copy existing
-feature, which converts MongoDB collections into Change Stream events.
-
-.. _source-configuration-copy-existing-description-end:
-
-.. seealso::
-
-   For an example of the copy existing feature, see the
-   :ref:`` Usage Example.
-
-.. include:: /includes/source-config-link.rst
-
-Settings
---------
-
-.. _source-configuration-copy-existing-table-start:
-
-.. list-table::
-   :header-rows: 1
-   :widths: 35 65
-
-   * - Name
-     - Description
-
-   * - | **copy.existing**
-     - | **Type:** boolean
-       |
-       | **Description:**
-       | Whether to enable the copy existing feature, which converts all
-         data in a MongoDB collection to Change Stream events and
-         publishes them on Kafka topics. If MongoDB changes the source
-         collection data after the connector starts the copy process, the
-         connector creates events for the changes after it completes the
-         copy process.
-
-       .. include:: /includes/copy-existing-admonition.rst
-
-       | **Default**: ``false``
-       | **Accepted Values**: ``true`` or ``false``
-
-   * - | **copy.existing.namespace.regex**
-     - | **Type:** string
-       |
-       | **Description:**
-       | Regular expression the connector uses to match namespaces from
-         which to copy data. A namespace describes the MongoDB database
-         name and collection separated by a period (for example,
-         ``databaseName.collectionName``).
-
-       .. example::
-
-          In the following example, the regular-expression setting matches
-          collections that start with "page" in the ``stats`` database.
-
-          .. code-block:: none
-
-             copy.existing.namespace.regex=stats\.page.*
-
-          The "\" character in the example above escapes the "." character
-          that follows it in the regular expression. For more information
-          on how to build regular expressions, see the Java API
-          documentation on `Patterns `__.
-
-       | **Default**: ``""``
-       | **Accepted Values**: A valid regular expression
-
-   * - | **copy.existing.pipeline**
-     - | **Type:** string
-       |
-       | **Description:**
-       | An array of :manual:`pipeline operations `
-         the connector runs when copying existing data. You can use this
-         setting to filter the source collection and improve the use of
-         indexes in the copying process.
-
-       .. example::
-
-          The following example shows how you can use the :manual:`$match `
-          aggregation operator to instruct the connector to copy only
-          documents that contain a ``closed`` field with a value of
-          ``false``.
-
-          .. code-block:: none
-
-             copy.existing.pipeline=[ { "$match": { "closed": "false" } } ]
-
-       | **Default**: ``[]``
-       | **Accepted Values**: Valid aggregation pipeline stages
-
-   * - | **copy.existing.max.threads**
-     - | **Type:** int
-       |
-       | **Description:**
-       | The maximum number of threads the connector can use to copy data.
-       | **Default**: number of processors available in the environment
-       | **Accepted Values**: An integer
-
-   * - | **copy.existing.queue.size**
-     - | **Type:** int
-       |
-       | **Description:**
-       | The size of the queue the connector can use when copying data.
-       | **Default**: ``16000``
-       | **Accepted Values**: An integer
-
-   * - | **copy.existing.allow.disk.use**
-     - | **Type:** boolean
-       |
-       | **Description:**
-       | When set to ``true``, the connector uses temporary disk storage
-         for the copy existing aggregation.
-       | **Default**: ``true``
-
-.. _source-configuration-copy-existing-table-end:
\ No newline at end of file
diff --git a/source/whats-new.txt b/source/whats-new.txt
index f866743..da67036 100644
--- a/source/whats-new.txt
+++ b/source/whats-new.txt
@@ -19,6 +19,7 @@ What's New
 
 Learn what's new by version:
 
+* :ref:`Version 1.13 `
 * :ref:`Version 1.12 `
 * :ref:`Version 1.11.2 `
 * :ref:`Version 1.11.1 `
@@ -39,6 +40,22 @@ Learn what's new by version:
 * :ref:`Version 1.1 `
 * :ref:`Version 1.0 `
 
+.. _kafka-connector-whats-new-1.13:
+
+What's New in 1.13
+------------------
+
+- Added a custom authentication provider interface for Source and Sink
+  Connectors. This feature enables you to write and use a custom implementation
+  class in your connector.
+
+.. TODO add link To learn more, see the :ref:`` guide.
+
+- Fixed an issue that occurred when validating configuration for Source
+  and Sink Connectors if the configuration contained secrets and used
+  the ``Provider`` framework. To learn more about this fix, see the
+  `KAFKA-414 `__ JIRA issue.
+
 .. _kafka-connector-whats-new-1.12:
 
 What's New in 1.12
 ------------------
@@ -240,7 +257,7 @@ Source Connector
 
 - Added support for the :manual:`allow disk use ` field
   of the {+query-api+} in the copy existing aggregation with the
-  ``copy.existing.allow.disk.use`` :ref:`configuration property `
+  ``copy.existing.allow.disk.use`` configuration property
 
 - Added support for `Avro schema namespaces `__ in the
   ``output.schema.value`` and ``output.schema.key``
   :ref:`configuration properties `
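
The deprecation notice on the deleted page maps each ``copy.existing*``
property to a ``startup.mode*`` replacement. The following sketch shows that
migration for the examples on the deleted page; the
``startup.mode.copy.existing.*`` names follow the naming pattern described in
:ref:`source-configuration-startup`, so verify them against that page before
relying on them.

.. code-block:: none

   # Deprecated since 1.9
   copy.existing=true
   copy.existing.namespace.regex=stats\.page.*
   copy.existing.pipeline=[ { "$match": { "closed": "false" } } ]

   # Assumed startup.mode* equivalents; confirm in the startup properties page
   startup.mode=copy_existing
   startup.mode.copy.existing.namespace.regex=stats\.page.*
   startup.mode.copy.existing.pipeline=[ { "$match": { "closed": "false" } } ]

The 1.13 release note describes the custom authentication provider interface
only at a high level, and its guide link is still a TODO. As a minimal sketch
of how such a provider might be wired into connector configuration, with
property names and the provider class as illustrative assumptions rather than
names confirmed by this changelog:

.. code-block:: none

   # Hypothetical properties; see the connector's authentication
   # documentation for the actual names
   mongo.custom.auth.mechanism.enabled=true
   mongo.custom.auth.mechanism.providerClass=com.example.SampleCustomAuthProvider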