diff --git a/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md b/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md
index 8396422624706..24d143fe656f0 100644
--- a/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md
+++ b/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md
@@ -202,8 +202,8 @@ spec:
- key: agentpool
operator: In
values:
- - acc # this is the name of your confidential computing nodel pool
- - acc_second # this is the name of your confidential computing nodel pool
+ - acc # this is the name of your confidential computing node pool
+ - acc_second # this is the name of your confidential computing node pool
containers:
- name: oe-helloworld
image: mcr.microsoft.com/acc/samples/oe-helloworld:latest
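
As a quick check that the `agentpool` values used in the node affinity above match your cluster, you can list each node's `agentpool` label. This is a minimal sketch, assuming the Kubernetes Python client (`pip install kubernetes`) and a kubeconfig that already points at the AKS cluster.

```python
# Minimal sketch: list AKS nodes and their agentpool label so you can
# confirm the pool names (for example, "acc") used in the node affinity rule.
# Assumes the Kubernetes Python client is installed and kubeconfig is set up.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubectl context
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    pool = node.metadata.labels.get("agentpool", "<none>")
    print(f"{node.metadata.name}: agentpool={pool}")
```

The printed `agentpool` values are the ones to place under `values:` in the affinity rule.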
diff --git a/articles/connectors/built-in.md b/articles/connectors/built-in.md
index 5c8542a243740..456de66be43cb 100644
--- a/articles/connectors/built-in.md
+++ b/articles/connectors/built-in.md
@@ -466,7 +466,7 @@ For more information, review the following documentation:
:::column:::
[![SWIFT icon][swift-icon]][swift-doc]
[**SWIFT**][swift-doc]
(*Standard workflow only*)
- Encode and decode Society for Worldwide Interbank Financial Telecommuncation (SIWFT) transactions in flat-file XML message format.
+ Encode and decode Society for Worldwide Interbank Financial Telecommunication (SWIFT) transactions in flat-file XML message format.
:::column-end:::
:::column:::
[![X12 icon][x12-icon]][x12-doc]
diff --git a/articles/container-apps/authentication.md b/articles/container-apps/authentication.md
index 950a870e2e562..39c86a80ce3db 100644
--- a/articles/container-apps/authentication.md
+++ b/articles/container-apps/authentication.md
@@ -160,7 +160,7 @@ The token format varies slightly according to the provider. See the following ta
| `microsoftaccount` | `{"access_token":""}` or `{"authentication_token": ""}` | `authentication_token` is preferred over `access_token`. The `expires_in` property is optional. When requesting the token from Live services, always request the `wl.basic` scope. |
| `google` | `{"id_token":""}` | The `authorization_code` property is optional. Providing an `authorization_code` value adds an access token and a refresh token to the token store. When specified, `authorization_code` can also optionally be accompanied by a `redirect_uri` property. |
| `facebook`| `{"access_token":""}` | Use a valid [user access token](https://developers.facebook.com/docs/facebook-login/access-tokens) from Facebook. |
-| `twitter` | `{"access_token":"", "access_token_secret":""}` | |
+| `twitter` | `{"access_token":"", "access_token_secret":""}` | |
| | | |
If the provider token is validated successfully, the API returns with an `authenticationToken` in the response body, which is your session token.
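
As a hedged illustration of the exchange the table and sentence above describe, a client can post the provider token to the `/.auth/login/<provider>` endpoint and read the returned `authenticationToken`; the hostname and token values below are placeholders.

```python
# Hedged sketch of client-directed sign-in: exchange a provider token for
# the session token (authenticationToken). Hostname and token are placeholders.
import requests

app = "https://<your-app>.azurecontainerapps.io"
provider = "google"                                # or facebook, twitter, ...
body = {"id_token": "<provider-issued-id-token>"}  # request body shape per the table

resp = requests.post(f"{app}/.auth/login/{provider}", json=body)
resp.raise_for_status()
session_token = resp.json()["authenticationToken"]

# Present the session token on later requests, conventionally via X-ZUMO-AUTH.
me = requests.get(f"{app}/.auth/me", headers={"X-ZUMO-AUTH": session_token})
print(me.status_code, me.json())
```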
diff --git a/articles/container-apps/networking.md b/articles/container-apps/networking.md
index 955f859a192b6..948499c4afd64 100644
--- a/articles/container-apps/networking.md
+++ b/articles/container-apps/networking.md
@@ -272,7 +272,7 @@ When you configure a NAT Gateway on your subnet, the NAT Gateway provides a stat
### Public network access (preview)
-The public network access setting determines whether your container apps environment is accesible from the public Internet. Whether you can change this setting after creating your environment depends on the environment's virtual IP configuration. The following table shows valid values for public network access, depending on your environment's virtual IP configuration.
+The public network access setting determines whether your container apps environment is accessible from the public Internet. Whether you can change this setting after creating your environment depends on the environment's virtual IP configuration. The following table shows valid values for public network access, depending on your environment's virtual IP configuration.
| Virtual IP | Supported public network access | Description |
|--|--|--|
diff --git a/articles/container-apps/sessions-tutorial-llamaindex.md b/articles/container-apps/sessions-tutorial-llamaindex.md
index 3e75a57523a93..ef3a553e4b9a2 100644
--- a/articles/container-apps/sessions-tutorial-llamaindex.md
+++ b/articles/container-apps/sessions-tutorial-llamaindex.md
@@ -46,7 +46,7 @@ The following lines of code instantiate a *AzureCodeInterpreterToolSpec* and pro
```python
code_interpreter_tool = AzureCodeInterpreterToolSpec(
- pool_managment_endpoint=pool_management_endpoint,
+ pool_management_endpoint=pool_management_endpoint,
)
agent = ReActAgent.from_tools(code_interpreter_tool.to_tool_list(), llm=llm, verbose=True)
```
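
For context, a short usage sketch of the agent configured above; it assumes `llm`, `pool_management_endpoint`, and the imports from the surrounding tutorial are already in place.

```python
# Usage sketch: ask the ReAct agent to execute Python in the session pool.
response = agent.chat(
    "Use Python to calculate the mean and standard deviation of [3, 7, 11, 15]."
)
print(response)
```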
diff --git a/articles/cost-management-billing/costs/ingest-azure-usage-at-scale.md b/articles/cost-management-billing/costs/ingest-azure-usage-at-scale.md
index 9f7ff7abd4bf4..2caee3818238b 100644
--- a/articles/cost-management-billing/costs/ingest-azure-usage-at-scale.md
+++ b/articles/cost-management-billing/costs/ingest-azure-usage-at-scale.md
@@ -114,7 +114,7 @@ Here are some of the characteristics of the service-side sync transfer used with
- The transfer creates checkpoints during its progress and exposes a _TransferCheckpoint_ object. The object represents the latest checkpoint via the _TransferContext_ object. If the _TransferCheckpoint_ is saved before a transfer is cancelled/aborted, the transfer can be resumed from the checkpoint for up to seven days. The transfer can be resumed from any checkpoint, not just the latest.
- If the transfer client process is killed and restarted without implementing the checkpoint feature:
- Before any blob transfers complete, the transfer restarts.
- - After some of the blobs complete, the transfer restarts for only the incompleted blobs.
+ - After some of the blobs complete, the transfer restarts for only the incomplete blobs.
- Pausing the client execution pauses the transfers.
- The blob transfer feature abstracts the client from transient failures. For instance, storage account throttling doesn't normally cause a transfer to fail but slows the transfer.
- Service-side transfers have low client resource usage for CPU and memory, some network bandwidth, and connections.
diff --git a/articles/cost-management-billing/costs/quick-create-budget-bicep.md b/articles/cost-management-billing/costs/quick-create-budget-bicep.md
index c8c499a8ddfbb..daf76f317ac4e 100644
--- a/articles/cost-management-billing/costs/quick-create-budget-bicep.md
+++ b/articles/cost-management-billing/costs/quick-create-budget-bicep.md
@@ -91,7 +91,7 @@ One Azure resource is defined in the Bicep file:
You need to enter the following parameters:
- - **startDate**: Replace **\** with the start date. It must be the first of the month in YYYY-MM-DD format. A future start date shouldn't be more than three months in the future. A past start date should be selected within the timegrain period.
+ - **startDate**: Replace **\** with the start date. It must be the first of the month in YYYY-MM-DD format. A future start date shouldn't be more than three months in the future. A past start date should be selected within the time grain period.
- **endDate**: Replace **\** with the end date in YYYY-MM-DD format. If not provided, it defaults to ten years from the start date.
- **contactEmails**: First create a variable that holds your emails and then pass that variable. Replace the sample emails with the email addresses to send the budget notification to when the threshold is exceeded.
@@ -137,7 +137,7 @@ One Azure resource is defined in the Bicep file:
You need to enter the following parameters:
- - **startDate**: Replace **\** with the start date. It must be the first of the month in YYYY-MM-DD format. A future start date shouldn't be more than three months in the future. A past start date should be selected within the timegrain period.
+ - **startDate**: Replace **\** with the start date. It must be the first of the month in YYYY-MM-DD format. A future start date shouldn't be more than three months in the future. A past start date should be selected within the time grain period.
- **endDate**: Replace **\** with the end date in YYYY-MM-DD format. If not provided, it defaults to ten years from the start date.
- **contactEmails**: First create a variable that holds your emails and then pass that variable. Replace the sample emails with the email addresses to send the budget notification to when the threshold is exceeded.
- **resourceGroupFilterValues** First create a variable that holds your resource group filter values and then pass that variable. Replace the sample filter values with the set of values for your resource group filter.
@@ -189,7 +189,7 @@ One Azure resource is defined in the Bicep file:
You need to enter the following parameters:
- - **startDate**: Replace **\** with the start date. It must be the first of the month in YYYY-MM-DD format. A future start date shouldn't be more than three months in the future. A past start date should be selected within the timegrain period.
+ - **startDate**: Replace **\** with the start date. It must be the first of the month in YYYY-MM-DD format. A future start date shouldn't be more than three months in the future. A past start date should be selected within the time grain period.
- **endDate**: Replace **\** with the end date in YYYY-MM-DD format. If not provided, it defaults to ten years from the start date.
- **contactEmails**: First create a variable that holds your emails and then pass that variable. Replace the sample emails with the email addresses to send the budget notification to when the threshold is exceeded.
- **contactGroups**: First create a variable that holds your contact groups and then pass that variable. Replace the sample contact groups with the list of action groups to send the budget notification to when the threshold is exceeded. You must pass the resource ID of the action group, which you can get with [az monitor action-group show](/cli/azure/monitor/action-group#az-monitor-action-group-show) or [Get-AzActionGroup](/powershell/module/az.monitor/get-azactiongroup).
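
Since every variant above requires **startDate** to be the first of a month in YYYY-MM-DD format, here is a small sketch for computing a valid value (the deployment command itself is omitted):

```python
# Sketch: compute a valid startDate (first day of the current month, YYYY-MM-DD)
# to pass as the Bicep deployment parameter described above.
from datetime import date

start_date = date.today().replace(day=1).isoformat()
print(start_date)  # for example, 2025-07-01
```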
diff --git a/articles/cost-management-billing/troubleshoot-billing/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables.md b/articles/cost-management-billing/troubleshoot-billing/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables.md
index 7ab5bde559c0b..5e9359a82af2a 100644
--- a/articles/cost-management-billing/troubleshoot-billing/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables.md
+++ b/articles/cost-management-billing/troubleshoot-billing/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables.md
@@ -46,7 +46,7 @@ In this section, you create a pivot table where you can troubleshoot overall gen
1. In the PivotTable Fields area, drag **Meter Category** and **Product** to the **Rows** section. Put **Product** below **Meter Category**.
:::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/rows-section.png" alt-text="Screenshot showing Meter Category and Product in Rows." lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/rows-section.png" :::
-1. Next, add the **costInBillingCurrenty** column to the **Values** section. You can also use the **Quantity** column instead to get information about consumption units and transactions. For example, GB and Hours. Or, transactions instead of cost in different currencies like USD, EUR, and INR.
+1. Next, add the **costInBillingCurrency** column to the **Values** section. You can also use the **Quantity** column instead to get information about consumption units and transactions (for example, GB and hours, or transaction counts) rather than cost in currencies like USD, EUR, and INR. For a scripted alternative, see the pandas sketch after these steps.
:::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/add-pivot-table-fields.png" alt-text="Screenshot showing fields added to the pivot table." lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/add-pivot-table-fields.png" :::
1. Now you have a dashboard for generalized consumption investigation. You can filter for a specific service using the filtering options in the pivot table.
:::image type="content" source="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-row-label.png" alt-text="Screenshot showing the pivot table filter option for a row label." lightbox="./media/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables/pivot-table-filter-option-row-label.png" :::
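
If you'd rather script the same investigation than build the pivot in Excel, a pandas sketch follows. The column names assume a Microsoft Customer Agreement usage file (`meterCategory`, `product`, `costInBillingCurrency`, `quantity`) and may differ slightly for other account types.

```python
# Sketch: reproduce the Meter Category / Product pivot from the usage CSV.
# Column names assume an MCA usage file; adjust them if your export differs.
import pandas as pd

usage = pd.read_csv("usage-details.csv")  # path to your exported usage file

pivot = usage.pivot_table(
    index=["meterCategory", "product"],
    values=["costInBillingCurrency", "quantity"],
    aggfunc="sum",
)
print(pivot.sort_values("costInBillingCurrency", ascending=False).head(20))
```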
diff --git a/articles/data-factory/azure-ssis-integration-runtime-standard-virtual-network-injection.md b/articles/data-factory/azure-ssis-integration-runtime-standard-virtual-network-injection.md
index d3b2e26175cfc..7af746346e320 100644
--- a/articles/data-factory/azure-ssis-integration-runtime-standard-virtual-network-injection.md
+++ b/articles/data-factory/azure-ssis-integration-runtime-standard-virtual-network-injection.md
@@ -13,7 +13,7 @@ ms.custom: devx-track-azurepowershell
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-When using SQL Server Integration Services (SSIS) in Azure Data Factory (ADF) or Synpase Pipelines, there are two methods for you to join your Azure-SSIS integration runtime (IR) to a virtual network: standard and express. If you use the standard method, you need to configure your virtual network to meet these requirements:
+When using SQL Server Integration Services (SSIS) in Azure Data Factory (ADF) or Synapse Pipelines, there are two methods for you to join your Azure-SSIS integration runtime (IR) to a virtual network: standard and express. If you use the standard method, you need to configure your virtual network to meet these requirements:
- Make sure that *Microsoft.Batch* is a registered resource provider in Azure subscription that has the virtual network for your Azure-SSIS IR to join. For detailed instructions, see the [Register Azure Batch as a resource provider](azure-ssis-integration-runtime-virtual-network-configuration.md#registerbatch) section.
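
As a hedged sketch of that check, the *Microsoft.Batch* registration state can be queried, and the provider registered if needed, with the Azure SDK for Python; the subscription ID is a placeholder.

```python
# Sketch: confirm Microsoft.Batch is registered in the subscription that hosts
# the virtual network for the Azure-SSIS IR, and register it if it isn't.
# Assumes azure-identity and azure-mgmt-resource are installed and that
# DefaultAzureCredential can sign in (for example, via `az login`).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

provider = client.providers.get("Microsoft.Batch")
print("Microsoft.Batch registration state:", provider.registration_state)

if provider.registration_state != "Registered":
    client.providers.register("Microsoft.Batch")
```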
diff --git a/articles/data-factory/connector-troubleshoot-synapse-sql.md b/articles/data-factory/connector-troubleshoot-synapse-sql.md
index 38c62bc1101cf..e6525206f3f55 100644
--- a/articles/data-factory/connector-troubleshoot-synapse-sql.md
+++ b/articles/data-factory/connector-troubleshoot-synapse-sql.md
@@ -285,7 +285,7 @@ This article provides suggestions to troubleshoot common problems with the Azure
## Error code: SqlDeniedPublicAccess
-- **Message**: `Cannot connect to SQL Database: '%server;', Database: '%database;', Reason: Connection was denied since Deny Public Network Access is set to Yes. To connect to this server, 1. If you persist public network access disabled, please use Managed Vritual Network IR and create private endpoint. https://docs.microsoft.com/en-us/azure/data-factory/managed-virtual-network-private-endpoint; 2. Otherwise you can enable public network access, set "Public network access" option to "Selected networks" on Azure SQL Networking setting.`
+- **Message**: `Cannot connect to SQL Database: '%server;', Database: '%database;', Reason: Connection was denied since Deny Public Network Access is set to Yes. To connect to this server, 1. If you persist public network access disabled, please use Managed Virtual Network IR and create private endpoint. https://docs.microsoft.com/en-us/azure/data-factory/managed-virtual-network-private-endpoint; 2. Otherwise you can enable public network access, set "Public network access" option to "Selected networks" on Azure SQL Networking setting.`
- **Causes**: Azure SQL Database is set to deny public network access. This requires to use managed virtual network and create private endpoint to access.
diff --git a/articles/data-factory/create-self-hosted-integration-runtime.md b/articles/data-factory/create-self-hosted-integration-runtime.md
index 564142c01f0e6..a2ef6683739f2 100644
--- a/articles/data-factory/create-self-hosted-integration-runtime.md
+++ b/articles/data-factory/create-self-hosted-integration-runtime.md
@@ -378,7 +378,7 @@ You also need to make sure that Microsoft Azure is in your company's allowlist.
### Configure proxy server settings when using a private endpoint
-If your company's network architure involves the use of private endpoints and for security reasons, and your company's policy does not allow a direct internet connection from the VM hosting the Self Hosted Integration Runtime to the Azure Data Factory service URL, then you will need to allow bypass the ADF Service URL for full connectivity. The following procedure provides instructions for updating the diahost.exe.config file. You should also repeat these steps for the diawp.exe.config file.
+If your company's network architecture involves the use of private endpoints, and for security reasons your company's policy doesn't allow a direct internet connection from the VM hosting the self-hosted integration runtime to the Azure Data Factory service URL, you need to configure the proxy to bypass the ADF service URL so that full connectivity is maintained. The following procedure provides instructions for updating the diahost.exe.config file. You should also repeat these steps for the diawp.exe.config file.
1. In File Explorer, make a safe copy of _C:\Program Files\Microsoft Integration Runtime\5.0\Shared\diahost.exe.config_ as a backup of the original file.
1. Open Notepad running as administrator.
@@ -516,7 +516,7 @@ Follow these steps:
# - signed in user needs rights to modify NSG (e.g. Network contributor) and to read status of the SHIR (e.g. reader), plus reader on the subscription
param (
- [string]$synapseRresourceGroupName = "synapse_test",
+ [string]$synapseResourceGroupName = "synapse_test",
[string]$nsgResourceGroupName = "adf_shir_rg",
[string]$synapseWorkspaceName = "synapse-test-jugi2",
[string]$integrationRuntimeName = "IntegrationRuntime2",
diff --git a/articles/data-factory/data-flow-aggregate.md b/articles/data-factory/data-flow-aggregate.md
index 5e393543fc174..2f54af1f6d872 100644
--- a/articles/data-factory/data-flow-aggregate.md
+++ b/articles/data-factory/data-flow-aggregate.md
@@ -1,7 +1,7 @@
---
title: Aggregate transformation in mapping data flow
titleSuffix: Azure Data Factory & Azure Synapse
-description: Learn how to aggregate data at scale in Azure Data Factory and Synapse Analyatics with the mapping data flow Aggregate transformation.
+description: Learn how to aggregate data at scale in Azure Data Factory and Synapse Analytics with the mapping data flow Aggregate transformation.
author: kromerm
ms.author: makromer
ms.reviewer: daperlov
diff --git a/articles/data-factory/data-flow-assert.md b/articles/data-factory/data-flow-assert.md
index 6d8486d6c7d29..75c1468e59733 100644
--- a/articles/data-factory/data-flow-assert.md
+++ b/articles/data-factory/data-flow-assert.md
@@ -101,7 +101,7 @@ source1, source2 assert(expectExists(AddressLine1 == AddressLine1, false, 'nonUS
```
source1, source2 assert(expectTrue(CountryRegion == 'United States', false, 'nonUS', null, 'only valid for U.S. addresses'),
expectExists(source1@AddressID == source2@AddressID, false, 'assertExist', StateProvince == 'Washington', toString(source1@AddressID) + ' already exists in Washington'),
- expectUnique(source1@AddressID, false, 'uniqueness', null, toString(source1@AddressID) + ' is not unqiue')) ~> Assert1
+ expectUnique(source1@AddressID, false, 'uniqueness', null, toString(source1@AddressID) + ' is not unique')) ~> Assert1
```
diff --git a/articles/data-factory/rest-apis-for-airflow-integrated-runtime.md b/articles/data-factory/rest-apis-for-airflow-integrated-runtime.md
index e67475c29e7e8..90dcab62d87c9 100644
--- a/articles/data-factory/rest-apis-for-airflow-integrated-runtime.md
+++ b/articles/data-factory/rest-apis-for-airflow-integrated-runtime.md
@@ -192,7 +192,7 @@ Response body:
"airflowEntityReferences": [],
"packageProviderPath": "plugins",
"enableAADIntegration": true,
- "enableTriggerers": false
+ "enableTriggers": false
}
},
"state": "Initial"
diff --git a/articles/data-factory/sap-change-data-capture-advanced-topics.md b/articles/data-factory/sap-change-data-capture-advanced-topics.md
index d3f8388839405..c792f7e481a02 100644
--- a/articles/data-factory/sap-change-data-capture-advanced-topics.md
+++ b/articles/data-factory/sap-change-data-capture-advanced-topics.md
@@ -15,18 +15,18 @@ ms.author: ulrichchrist
Learn about advanced topics for the SAP CDC connector like metadata driven data integration, debugging, and more.
-## Parametrizing an SAP CDC mapping data flow
+## Parameterizing an SAP CDC mapping data flow
One of the key strengths of pipelines and mapping data flows in Azure Data Factory and Azure Synapse Analytics is the support for metadata driven data integration. With this feature, it's possible to design a single (or few) parametrized pipeline that can be used to handle integration of potentially hundreds or even thousands of sources.
The SAP CDC connector has been designed with this principle in mind: all relevant properties, whether it's the source object, run mode, key columns, etc., can be provided via parameters to maximize flexibility and reuse potential of SAP CDC mapping data flows.
-To understand the basic concepts of parametrizing mapping data flows, read [Parameterizing mapping data flows](parameters-data-flow.md).
+To understand the basic concepts of parameterizing mapping data flows, read [Parameterizing mapping data flows](parameters-data-flow.md).
In the template gallery of Azure Data Factory and Azure Synapse Analytics, you find a [template pipeline and data flow](solution-template-replicate-multiple-objects-sap-cdc.md) which shows how to parametrize SAP CDC data ingestion.
-### Parametrizing source and run mode
+### Parameterizing source and run mode
-Mapping data flows don't necessarily require a Dataset artifact: both source and sink transformations offer a **Source type** (or **Sink type**) **Inline**. In this case, all source properties otherwise defined in an ADF dataset can be configured in the **Source options** of the source transformation (or **Settings** tab of the sink transformation). Using an inline dataset provides better overview and simplifies parametrizing a mapping data flow since the complete source (or sink) configuration is maintained in a one place.
+Mapping data flows don't necessarily require a Dataset artifact: both source and sink transformations offer a **Source type** (or **Sink type**) of **Inline**. In this case, all source properties otherwise defined in an ADF dataset can be configured in the **Source options** of the source transformation (or the **Settings** tab of the sink transformation). Using an inline dataset provides a better overview and simplifies parameterizing a mapping data flow, since the complete source (or sink) configuration is maintained in one place.
For SAP CDC, the properties that are most commonly set via parameters are found in the tabs **Source options** and **Optimize**.
When **Source type** is **Inline**, the following properties can be parametrized in **Source options**.
@@ -45,7 +45,7 @@ When **Source type** is **Inline**, the following properties can be parametrized
- **incrementalLoad** for **Incremental changes only**, which initiates a change data capture process without extracting a current full snapshot.
- **Key columns**: key columns are provided as an array of (double-quoted) strings. For example, when working with SAP table **VBAP** (sales order items), the key definition would have to be \["VBELN", "POSNR"\] (or \["MANDT","VBELN","POSNR"\] in case the client field is taken into account as well).
-### Parametrizing the filter conditions for source partitioning
+### Parameterizing the filter conditions for source partitioning
In the **Optimize** tab, a source partitioning scheme (see [optimizing performance for full or initial loads](connector-sap-change-data-capture.md#optimizing-performance-of-full-or-initial-loads-with-source-partitioning)) can be defined via parameters. Typically, two steps are required:
1. Define the source partitioning scheme.
@@ -124,7 +124,7 @@ Finally, in the **optimize** tab of the source transformation in your mapping da
:::image type="content" source="media/sap-change-data-capture-solution/sap-change-data-capture-advanced-ingest-partition-parameter.png" alt-text="Screenshot showing how to use the partitioning parameter in the optimize tab of the source transformation.":::
-### Parametrizing the Checkpoint Key
+### Parameterizing the Checkpoint Key
When using a parametrized data flow to extract data from multiple SAP CDC sources, it's important to parametrize the **Checkpoint Key** in the data flow activity of your pipeline. The checkpoint key is used by Azure Data Factory to manage the status of a change data capture process. To avoid that the status of one CDC process overwrites the status of another one, make sure that the checkpoint key values are unique for each parameter set used in a dataflow.
diff --git a/articles/data-factory/update-machine-learning-models.md b/articles/data-factory/update-machine-learning-models.md
index 49bf04b4aa836..54a6b7c52d10a 100644
--- a/articles/data-factory/update-machine-learning-models.md
+++ b/articles/data-factory/update-machine-learning-models.md
@@ -207,7 +207,7 @@ The pipeline has two activities: **AzureMLBatchExecution** and **AzureMLUpdateRe
"activities": [
{
"name": "amlBEGetilearner",
- "description": "Use AML BES to get the ileaner file from training web service",
+ "description": "Use AML BES to get the ilearner file from training web service",
"type": "AzureMLBatchExecution",
"linkedServiceName": {
"referenceName": "trainingEndpoint",