From 1eef5d00502f9f8c2be707360b96b91bf5f22d42 Mon Sep 17 00:00:00 2001 From: aeluce Date: Mon, 27 Jan 2025 09:06:20 -0600 Subject: [PATCH] fix/add links --- .../MariaDB/amazon-rds-mariadb.md | 10 +++++----- .../MySQL/amazon-rds-mysql.md | 10 +++++----- .../capture-connectors/OracleDB/OracleDB.md | 6 +++--- .../capture-connectors/OracleDB/flashback.md | 8 ++++---- .../PostgreSQL/amazon-rds-postgres.md | 8 ++++---- .../PostgreSQL/postgres-batch.md | 2 ++ .../SQLServer/amazon-rds-sqlserver.md | 9 +-------- .../capture-connectors/SQLServer/sqlserver.md | 4 ++-- .../Connectors/capture-connectors/aircall.md | 2 +- .../capture-connectors/google-drive.md | 2 +- .../capture-connectors/google-pubsub.md | 4 +++- .../capture-connectors/linkedin-pages.md | 18 +++++++++++------- .../netsuite-suiteanalytics.md | 5 ++++- .../Connectors/capture-connectors/pinterest.md | 4 ++-- .../capture-connectors/woocommerce.md | 4 ++-- 15 files changed, 50 insertions(+), 46 deletions(-) diff --git a/site/docs/reference/Connectors/capture-connectors/MariaDB/amazon-rds-mariadb.md b/site/docs/reference/Connectors/capture-connectors/MariaDB/amazon-rds-mariadb.md index 5084cda8e0..c6568dffac 100644 --- a/site/docs/reference/Connectors/capture-connectors/MariaDB/amazon-rds-mariadb.md +++ b/site/docs/reference/Connectors/capture-connectors/MariaDB/amazon-rds-mariadb.md @@ -48,18 +48,18 @@ To use this connector, you'll need a MariaDB database setup with the following. 2. Create a RDS parameter group to enable replication in MariaDB. - 1. [Create a parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Creating). + 1. [Create a parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.Creating.html). Create a unique name and description and set the following properties: - **Family**: mariadb10.6 - **Type**: DB Parameter group - 2. 
[Modify the new parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Modifying) and update the following parameters: + 2. [Modify the new parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.Modifying.html) and update the following parameters: - binlog_format: ROW - 3. [Associate the parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Associating) - with the database and set [Backup Retention Period](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html#USER_WorkingWithAutomatedBackups.Enabling) to 7 days. + 3. [Associate the parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.Associating.html) + with the database and set [Backup Retention Period](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.Enabling.html) to 7 days. Reboot the database to allow the changes to take effect. 3. Switch to your MariaDB client. Run the following commands to create a new user for the capture with appropriate permissions: @@ -84,7 +84,7 @@ This connector supports capturing from a read replica of your database, provided binary logging is enabled on the replica and all other requirements are met. To create a read replica: -1. Follow RDS instructions to [create a read replica](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create) +1. Follow RDS instructions to [create a read replica](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.Create.html) of your MariaDB database. 2. 
[Modify the replica](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) diff --git a/site/docs/reference/Connectors/capture-connectors/MySQL/amazon-rds-mysql.md b/site/docs/reference/Connectors/capture-connectors/MySQL/amazon-rds-mysql.md index e6a70529ff..307acbb582 100644 --- a/site/docs/reference/Connectors/capture-connectors/MySQL/amazon-rds-mysql.md +++ b/site/docs/reference/Connectors/capture-connectors/MySQL/amazon-rds-mysql.md @@ -39,18 +39,18 @@ To use this connector, you'll need a MySQL database setup with the following. 2. Create a RDS parameter group to enable replication in MySQL. - 1. [Create a parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Creating). + 1. [Create a parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.Creating.html). Create a unique name and description and set the following properties: - **Family**: mysql8.0 - **Type**: DB Parameter group - 2. [Modify the new parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Modifying) and update the following parameters: + 2. [Modify the new parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.Modifying.html) and update the following parameters: - binlog_format: ROW - 3. [Associate the parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Associating) - with the database and set [Backup Retention Period](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html#USER_WorkingWithAutomatedBackups.Enabling) to 7 days. + 3. 
[Associate the parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.Associating.html) + with the database and set [Backup Retention Period](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.Enabling.html) to 7 days. Reboot the database to allow the changes to take effect. 3. Switch to your MySQL client. Run the following commands to create a new user for the capture with appropriate permissions: @@ -77,7 +77,7 @@ This connector supports capturing from a read replica of your database, provided binary logging is enabled on the replica and all other requirements are met. To create a read replica: -1. Follow RDS instructions to [create a read replica](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create) +1. Follow RDS instructions to [create a read replica](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.Create.html) of your MySQL database. 2. [Modify the replica](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) diff --git a/site/docs/reference/Connectors/capture-connectors/OracleDB/OracleDB.md b/site/docs/reference/Connectors/capture-connectors/OracleDB/OracleDB.md index 5e8b18670b..bd8fddb873 100644 --- a/site/docs/reference/Connectors/capture-connectors/OracleDB/OracleDB.md +++ b/site/docs/reference/Connectors/capture-connectors/OracleDB/OracleDB.md @@ -1,7 +1,7 @@ # OracleDB This connector captures data from OracleDB into Flow collections using [Oracle Logminer](https://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-logminer-utility.html#GUID-2555A155-01E3-483E-9FC6-2BDC2D8A4093). -It is available for use in the Flow web application. For local development or open-source workflows, `ghcr.io/estuary/source-oracle:dev` provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions. 
+It is available for use in the Flow web application. For local development or open-source workflows, [`ghcr.io/estuary/source-oracle:dev`](https://ghcr.io/estuary/source-oracle:dev) provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions. ## Prerequisites * Oracle 11g or above @@ -153,7 +153,7 @@ To allow secure connections via SSH tunneling: ### Sample -```json +```yaml captures: ${PREFIX}/${CAPTURE_NAME}: endpoint: @@ -172,7 +172,7 @@ captures: sshForwarding: privateKey: -----BEGIN RSA PRIVATE KEY-----\n... sshEndpoint: ssh://ec2-user@19.220.21.33:22 - + bindings: - resource: namespace: ${TABLE_NAMESPACE} diff --git a/site/docs/reference/Connectors/capture-connectors/OracleDB/flashback.md b/site/docs/reference/Connectors/capture-connectors/OracleDB/flashback.md index f698f8811d..f0bcc561aa 100644 --- a/site/docs/reference/Connectors/capture-connectors/OracleDB/flashback.md +++ b/site/docs/reference/Connectors/capture-connectors/OracleDB/flashback.md @@ -2,7 +2,7 @@ # OracleDB (Flashback) This connector captures data from OracleDB into Flow collections using [Oracle Flashback](https://www.oracle.com/database/technologies/flashback/). -It is available for use in the Flow web application. For local development or open-source workflows, ghcr.io/estuary/source-oracle-flashback:dev provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions. +It is available for use in the Flow web application. For local development or open-source workflows, [`ghcr.io/estuary/source-oracle-flashback:dev`](https://ghcr.io/estuary/source-oracle-flashback:dev) provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions. 
## Prerequisites * Oracle 11g or above @@ -48,7 +48,7 @@ GRANT SELECT ON V$DATABASE TO estuary_flow_user; GRANT SELECT_CATALOG_ROLE TO estuary_flow_user; ``` -1. Your database user should now be ready for use with Estuary Flow. +5. Your database user should now be ready for use with Estuary Flow. ### Recommended Database Configuration @@ -97,7 +97,7 @@ To allow secure connections via SSH tunneling: ### Sample -```json +```yaml captures: ${PREFIX}/${CAPTURE_NAME}: endpoint: @@ -114,7 +114,7 @@ captures: sshForwarding: privateKey: -----BEGIN RSA PRIVATE KEY-----\n... sshEndpoint: ssh://ec2-user@19.220.21.33:22 - + bindings: - resource: name: ${TABLE_NAME} diff --git a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/amazon-rds-postgres.md b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/amazon-rds-postgres.md index 7c2c417faa..e9a8e8a358 100644 --- a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/amazon-rds-postgres.md +++ b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/amazon-rds-postgres.md @@ -42,15 +42,15 @@ You'll need a PostgreSQL database setup with the following: 2. Enable logical replication on your RDS PostgreSQL instance. - 1. Create a [parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Creating). + 1. Create a [parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.Creating.html). Create a unique name and description and set the following properties: - **Family**: postgres13 - **Type**: DB Parameter group - 2. [Modify the new parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Modifying) and set `rds.logical_replication=1`. + 2. 
[Modify the new parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.Modifying.html) and set `rds.logical_replication=1`. - 3. [Associate the parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Associating) with the database. + 3. [Associate the parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.Associating.html) with the database. 4. Reboot the database to allow the new parameter group to take effect. @@ -155,7 +155,7 @@ store them separately. TOASTed values can sometimes present a challenge for systems that rely on the PostgreSQL write-ahead log (WAL), like this connector. If a change event occurs on a row that contains a TOASTed value, _but the TOASTed value itself is unchanged_, it is omitted from the WAL. -As a result, the connector emits a row update with the a value omitted, which might cause +As a result, the connector emits a row update with the value omitted, which might cause unexpected results in downstream catalog tasks if adjustments are not made. The PostgreSQL connector handles TOASTed values for you when you follow the [standard discovery workflow](/concepts/connectors.md#flowctl-discover) diff --git a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/postgres-batch.md b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/postgres-batch.md index c7f7bf33a0..813e3b292b 100644 --- a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/postgres-batch.md +++ b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/postgres-batch.md @@ -3,6 +3,8 @@ This connector captures data from Postgres into Flow collections by periodically executing queries and translating the results into JSON documents. 
+For local development or open-source workflows, [`ghcr.io/estuary/source-postgres-batch:dev`](https://ghcr.io/estuary/source-postgres-batch:dev) provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions. + We recommend using our [PostgreSQL CDC Connector](http://go.estuary.dev/source-postgres) instead if possible. Using CDC provides lower latency data capture, delete and update events, and usually has a smaller impact on the source database. diff --git a/site/docs/reference/Connectors/capture-connectors/SQLServer/amazon-rds-sqlserver.md b/site/docs/reference/Connectors/capture-connectors/SQLServer/amazon-rds-sqlserver.md index 9da364bb04..2d642152c8 100644 --- a/site/docs/reference/Connectors/capture-connectors/SQLServer/amazon-rds-sqlserver.md +++ b/site/docs/reference/Connectors/capture-connectors/SQLServer/amazon-rds-sqlserver.md @@ -31,13 +31,6 @@ To capture change events from SQL Server tables using this connector, you need: - Access to the change tables created as part of the SQL Server CDC process. - `SELECT`, `INSERT`, and `UPDATE` permissions on the watermarks table -To meet these requirements, follow the steps for your hosting type. - -- [Self-hosted SQL Server](#setup-self-hosted-sql-server) -- [Azure SQL Database](#setup-azure-sql-database) -- [Amazon RDS for SQL Server](#setup-amazon-rds-for-sql-server) -- [Google Cloud SQL for SQL Server](#setup-google-cloud-sql-for-sql-server) - ## Setup 1. Allow connections between the database and Estuary Flow. There are two ways to do this: by granting direct access to Flow's IP or by creating an SSH tunnel. @@ -45,7 +38,7 @@ To meet these requirements, follow the steps for your hosting type. 1. To allow direct access: - [Modify the database](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html), setting **Public accessibility** to **Yes**. 
- - Edit the VPC security group associated with your database, or create a new VPC security group and associate it with the database as described in [the Amazon documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Create).Create a new inbound rule and a new outbound rule that allow all traffic from the [Estuary Flow IP addresses](/reference/allow-ip-addresses). + - Edit the VPC security group associated with your database, or create a new VPC security group and associate it with the database as described in [the Amazon documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Create). Create a new inbound rule and a new outbound rule that allow all traffic from the [Estuary Flow IP addresses](/reference/allow-ip-addresses). 2. To allow secure connections via SSH tunneling: - Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) diff --git a/site/docs/reference/Connectors/capture-connectors/SQLServer/sqlserver.md b/site/docs/reference/Connectors/capture-connectors/SQLServer/sqlserver.md index 83a1b56d6a..fb6a30b7a8 100644 --- a/site/docs/reference/Connectors/capture-connectors/SQLServer/sqlserver.md +++ b/site/docs/reference/Connectors/capture-connectors/SQLServer/sqlserver.md @@ -36,7 +36,7 @@ To capture change events from SQL Server tables using this connector, you need: - A user role with: - `SELECT` permissions on the CDC schema and the schemas that contain tables to be captured. - Access to the change tables created as part of the SQL Server CDC process. - - `SELECT`, `INSERT`, and `UPDATE` permissions on the watermarks table + - `SELECT`, `INSERT`, and `UPDATE` permissions on the watermarks table. ## Setup @@ -44,7 +44,7 @@ To meet these requirements, follow the steps for your hosting type. 
- [Self-hosted SQL Server](#self-hosted-sql-server) - [Azure SQL Database](#azure-sql-database) -- [Amazon RDS for SQL Server](./amazon-rds-sqlserver/)) +- [Amazon RDS for SQL Server](./amazon-rds-sqlserver/) - [Google Cloud SQL for SQL Server](./google-cloud-sql-sqlserver/) ### Self-hosted SQL Server diff --git a/site/docs/reference/Connectors/capture-connectors/aircall.md b/site/docs/reference/Connectors/capture-connectors/aircall.md index 238b739167..dfa450d2ac 100644 --- a/site/docs/reference/Connectors/capture-connectors/aircall.md +++ b/site/docs/reference/Connectors/capture-connectors/aircall.md @@ -3,7 +3,7 @@ This connector captures data from Aircall into Flow collections. -It is available for use in the Flow web application. For local development or open-source workflows, ghcr.io/estuary/source-aircall:dev provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions. +It is available for use in the Flow web application. For local development or open-source workflows, [`ghcr.io/estuary/source-aircall:dev`](https://ghcr.io/estuary/source-aircall:dev) provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions. ## Prerequisites To set up the Aircall connector, you need the following prerequisite: diff --git a/site/docs/reference/Connectors/capture-connectors/google-drive.md b/site/docs/reference/Connectors/capture-connectors/google-drive.md index 670b0df190..8717cccd85 100644 --- a/site/docs/reference/Connectors/capture-connectors/google-drive.md +++ b/site/docs/reference/Connectors/capture-connectors/google-drive.md @@ -2,7 +2,7 @@ This connector lets you capture data from your Google Drive account into Flow collections. -[ghcr.io/estuary/source-google-drive:dev](https://ghcr.io/estuary/source-google-drive:dev) provides the latest connector image. 
For access to previous image versions, follow the link in your browser. +[`ghcr.io/estuary/source-google-drive:dev`](https://ghcr.io/estuary/source-google-drive:dev) provides the latest connector image. For access to previous image versions, follow the link in your browser. ## Prerequisites diff --git a/site/docs/reference/Connectors/capture-connectors/google-pubsub.md b/site/docs/reference/Connectors/capture-connectors/google-pubsub.md index 8fa6a3fe41..d195c34f5b 100644 --- a/site/docs/reference/Connectors/capture-connectors/google-pubsub.md +++ b/site/docs/reference/Connectors/capture-connectors/google-pubsub.md @@ -4,11 +4,13 @@ This connector captures messages in JSON format into Flow collections from Google Cloud Pub/Sub topics. -During setup, this connect will discover all topics it has access to. Each +During setup, this connector will discover all topics it has access to. Each [capture binding](../../../concepts/README.md#resources-and-bindings) that is enabled for a topic will automatically create a new subscription, and the connector will read messages from that subscription. +[`ghcr.io/estuary/source-google-pubsub:dev`](https://ghcr.io/estuary/source-google-pubsub:dev) provides the latest connector image. You can also follow the link in your browser to see past image versions. + ## Prerequisites To use this connector, you will need the following prerequisites: diff --git a/site/docs/reference/Connectors/capture-connectors/linkedin-pages.md b/site/docs/reference/Connectors/capture-connectors/linkedin-pages.md index 8c46d14ba2..3c14a0c19e 100644 --- a/site/docs/reference/Connectors/capture-connectors/linkedin-pages.md +++ b/site/docs/reference/Connectors/capture-connectors/linkedin-pages.md @@ -4,6 +4,17 @@ This connector captures data from one LinkedIn Page into Flow collections via the [LinkedIn Marketing API](https://learn.microsoft.com/en-us/linkedin/marketing/integrations/marketing-integrations-overview?view=li-lms-2024-03). 
+[`ghcr.io/estuary/source-linkedin-pages:dev`](https://ghcr.io/estuary/source-linkedin-pages:dev) provides the latest connector image. You can also follow the link in your browser to see past image versions. + +## Supported Streams + +- [Organization Lookup](https://docs.microsoft.com/en-us/linkedin/marketing/integrations/community-management/organizations/organization-lookup-api?tabs=http#retrieve-organizations) +- [Follower Statistics](https://docs.microsoft.com/en-us/linkedin/marketing/integrations/community-management/organizations/follower-statistics?tabs=http#retrieve-lifetime-follower-statistics) +- [Share Statistics](https://docs.microsoft.com/en-us/linkedin/marketing/integrations/community-management/organizations/share-statistics?tabs=http#retrieve-lifetime-share-statistics) +- [Total Follower Count](https://docs.microsoft.com/en-us/linkedin/marketing/integrations/community-management/organizations/organization-lookup-api?tabs=http#retrieve-organization-follower-count) + +By default, each resource is mapped to a Flow collection through a separate binding. + ## Prerequisites * An existing LinkedIn Account @@ -60,10 +71,3 @@ See [connectors](/concepts/connectors.md#using-connectors) to learn more about u | `/refresh_token` | Refresh Token | The token value generated using the LinkedIn Developers [OAuth Token Tools](https://www.linkedin.com/developers/tools/oauth). | string | Required | | `/access_token` | Access Token | The token value generated using the LinkedIn Developers [OAuth Token Tools](https://www.linkedin.com/developers/tools/oauth). 
| string | Required | - -## Supported Streams - -- [Organization Lookup](https://docs.microsoft.com/en-us/linkedin/marketing/integrations/community-management/organizations/organization-lookup-api?tabs=http#retrieve-organizations) -- [Follower Statistics](https://docs.microsoft.com/en-us/linkedin/marketing/integrations/community-management/organizations/follower-statistics?tabs=http#retrieve-lifetime-follower-statistics) -- [Share Statistics](https://docs.microsoft.com/en-us/linkedin/marketing/integrations/community-management/organizations/share-statistics?tabs=http#retrieve-lifetime-share-statistics) -- [Total Follower Count](https://docs.microsoft.com/en-us/linkedin/marketing/integrations/community-management/organizations/organization-lookup-api?tabs=http#retrieve-organization-follower-count) diff --git a/site/docs/reference/Connectors/capture-connectors/netsuite-suiteanalytics.md b/site/docs/reference/Connectors/capture-connectors/netsuite-suiteanalytics.md index 778b1f7f45..067914b905 100644 --- a/site/docs/reference/Connectors/capture-connectors/netsuite-suiteanalytics.md +++ b/site/docs/reference/Connectors/capture-connectors/netsuite-suiteanalytics.md @@ -2,6 +2,9 @@ This connector captures data from Oracle NetSuite into Flow collections. It relies on the SuiteAnalytics Connect feature in order to both load large amounts of data quickly, as well as introspect the available tables, their schemas, keys, and cursor fields. +[`ghcr.io/estuary/source-netsuite:dev`](https://ghcr.io/estuary/source-netsuite:dev) provides the +latest connector image. You can also follow the link in your browser to see past image versions. + If you don't have SuiteAnalytics Connect, check out our [SuiteTalk REST](../netsuite-suitetalk) connector. 
## Supported data resources
@@ -118,7 +121,7 @@ See [connectors](../../../concepts/connectors.md#using-connectors) to learn more

| Property | Title | Description | Type | Required/Default |
| ----------------------------- | ---------------------- | ------------------------------------------------------------------------------------------------ | ------ | ---------------- |
-| `/account | Netsuite Account ID | Netsuite realm/Account ID e.g. 2344535, as for `production` or 2344535_SB1, as for the `sandbox` | string | Required |
+| `/account` | Netsuite Account ID | Netsuite realm/Account ID, e.g. 2344535 for `production` or 2344535_SB1 for `sandbox` | string | Required |
| `/role_id` | Role ID | The ID of the role you created. Defaults to 3, which is the ID of the administrator role. | int | 3 |
| `/suiteanalytics_data_source` | Data Source | Which NetSuite data source to use. Options are `NetSuite.com`, or `NetSuite2.com` | string | Required |
| `/authentication` | Authentication Details | Credentials to access your NetSuite account | object | Required |
diff --git a/site/docs/reference/Connectors/capture-connectors/pinterest.md b/site/docs/reference/Connectors/capture-connectors/pinterest.md
index bd918aab10..cd21be8439 100644
--- a/site/docs/reference/Connectors/capture-connectors/pinterest.md
+++ b/site/docs/reference/Connectors/capture-connectors/pinterest.md
@@ -2,7 +2,7 @@
# Pinterest

This connector captures data from Pinterest into Flow collections.
-It is available for use in the Flow web application. For local development or open-source workflows, ghcr.io/estuary/source-pinterest:dev provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions.
+It is available for use in the Flow web application. 
For local development or open-source workflows, [`ghcr.io/estuary/source-pinterest:dev`](https://ghcr.io/estuary/source-pinterest:dev) provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions. ## Prerequisites To set up the Pinterest source connector, you'll need the following prerequisites: @@ -141,5 +141,5 @@ The Pinterest API imposes certain rate limits for the connector. Please take not * Boards streams: 10 calls per second per user per app :::note -For any additional information or troubleshooting, refer to the official Pinterest API documentation. +For any additional information or troubleshooting, refer to the official [Pinterest API documentation](https://developers.pinterest.com/docs/overview/welcome/). ::: diff --git a/site/docs/reference/Connectors/capture-connectors/woocommerce.md b/site/docs/reference/Connectors/capture-connectors/woocommerce.md index 95edcf3ffc..9e8ef085f5 100644 --- a/site/docs/reference/Connectors/capture-connectors/woocommerce.md +++ b/site/docs/reference/Connectors/capture-connectors/woocommerce.md @@ -2,10 +2,10 @@ # WooCommerce This connector captures data from WooCommerce into Flow collections. -It is available for use in the Flow web application. For local development or open-source workflows, ghcr.io/estuary/source-woocommerce:dev provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions. +It is available for use in the Flow web application. For local development or open-source workflows, [`ghcr.io/estuary/source-woocommerce:dev`](https://ghcr.io/estuary/source-woocommerce:dev) provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions. 
## Prerequisites

-To set up the WooCommerce source connector with: you need:
+To set up the WooCommerce source connector, you need:

* WooCommerce 3.5+
* WordPress 4.4+
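
Most of the changes in this patch are mechanical URL rewrites: old AWS RDS links that used a `#USER_...` fragment anchor on `USER_WorkingWithDBInstanceParamGroups.html` become standalone `.html` pages. A small audit script makes it easy to confirm no legacy-style links survive in the touched docs. The sketch below is hypothetical tooling, not part of the patch, and assumes a regex is acceptable for extracting inline Markdown links (it is not a full Markdown parser):

```python
import re

# Matches inline Markdown links of the form [text](target).
# Good enough for a quick audit of doc pages; does not handle
# reference-style links or targets containing spaces.
LINK_RE = re.compile(r"\[([^\]]*)\]\(([^)\s]+)\)")


def extract_link_targets(markdown: str) -> list[str]:
    """Return every inline link target found in a Markdown string."""
    return [target for _text, target in LINK_RE.findall(markdown)]


def find_legacy_rds_anchors(markdown: str) -> list[str]:
    """Flag old-style AWS RDS doc URLs that use a fragment anchor on
    the combined parameter-groups page, which this patch rewrites to
    standalone '.html' pages."""
    return [
        t
        for t in extract_link_targets(markdown)
        if "USER_WorkingWithDBInstanceParamGroups.html#" in t
    ]


if __name__ == "__main__":
    sample = (
        "1. [Create a parameter group](https://docs.aws.amazon.com/AmazonRDS/"
        "latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html"
        "#USER_WorkingWithParamGroups.Creating)."
    )
    print(find_legacy_rds_anchors(sample))
```

Running `find_legacy_rds_anchors` over each modified `.md` file and asserting an empty result is a cheap follow-up check for link-fix patches like this one.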