diff --git a/articles/data-factory/connector-azure-database-for-postgresql.md b/articles/data-factory/connector-azure-database-for-postgresql.md
index 2489d30986c18..8e4180fb05ad7 100644
--- a/articles/data-factory/connector-azure-database-for-postgresql.md
+++ b/articles/data-factory/connector-azure-database-for-postgresql.md
@@ -210,7 +210,10 @@ To copy data to Azure Database for PostgreSQL, the following properties are supp
 |:--- |:--- |:--- |
 | type | The type property of the copy activity sink must be set to **AzurePostgreSQLSink**. | Yes |
 | preCopyScript | Specify a SQL query for the copy activity to execute before you write data into Azure Database for PostgreSQL in each run. You can use this property to clean up the preloaded data. | No |
-| writeMethod | The method used to write data into Azure Database for PostgreSQL.<br>Allowed values are: **CopyCommand** (default, which is more performant), **BulkInsert**. | No |
+| writeMethod | The method used to write data into Azure Database for PostgreSQL.<br>Allowed values are: **CopyCommand** (default, which is more performant), **BulkInsert**, and **Upsert**. | No |
+| upsertSettings | Specify the group of settings for write behavior.<br>Applies when `writeMethod` is set to `Upsert`. | No |
+| ***Under `upsertSettings`:*** | | |
+| keys | Specify the column names for unique row identification. Either a single key or a series of keys can be used. | No |
 | writeBatchSize | The number of rows loaded into Azure Database for PostgreSQL per batch.<br>Allowed value is an integer that represents the number of rows. | No (default is 1,000,000) |
 | writeBatchTimeout | Wait time for the batch insert operation to complete before it times out.<br>Allowed values are Timespan strings. An example is 00:30:00 (30 minutes). | No (default is 00:30:00) |

@@ -248,6 +251,51 @@ To copy data to Azure Database for PostgreSQL, the following properties are supp
 ]
 ```

+**Example 2: Upsert data**
+
+```json
+"activities":[
+    {
+        "name": "CopyToAzureDatabaseForPostgreSQL",
+        "type": "Copy",
+        "inputs": [
+            {
+                "referenceName": "<input dataset name>",
+                "type": "DatasetReference"
+            }
+        ],
+        "outputs": [
+            {
+                "referenceName": "<output dataset name>",
+                "type": "DatasetReference"
+            }
+        ],
+        "typeProperties": {
+            "source": {
+                "type": "<source type>"
+            },
+            "sink": {
+                "type": "AzurePostgreSQLSink",
+                "writeMethod": "Upsert",
+                "upsertSettings": {
+                    "keys": [
+                        "<key column name>"
+                    ]
+                }
+            }
+        }
+    }
+]
+```
+
+### Append data
+
+Appending data is the default behavior of this Azure Database for PostgreSQL sink connector. The service does a bulk insert to write to your table efficiently. You can configure the source and sink accordingly in the copy activity.
+
+### Upsert data
+
+Copy activity now natively supports upsert: it updates a row in the sink table when the key already exists, and otherwise inserts a new row. To upsert data, you must provide a set of key columns that are covered by a primary key or a unique constraint on the sink table.
+
 ## Parallel copy from Azure Database for PostgreSQL

 The Azure Database for PostgreSQL connector in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the **Source** tab of the copy activity.
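This diff doesn't show what the partitioned source configuration looks like in JSON. As a rough sketch only, assuming the `partitionOption` and `partitionSettings` property names follow the pattern used by similar copy activity relational sources, and assuming an `AzurePostgreSqlSource` source type (neither is confirmed by the lines above), a dynamic-range parallel copy source might be shaped like this:

```json
"source": {
    "type": "AzurePostgreSqlSource",
    "partitionOption": "DynamicRange",
    "partitionSettings": {
        "partitionColumnName": "<partition column name>",
        "partitionLowerBound": "<lower bound of the partition column>",
        "partitionUpperBound": "<upper bound of the partition column>"
    }
}
```

The options surfaced on the **Source** tab of the copy activity remain the authoritative reference for the exact property names and allowed values.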