From 7dc0c6517c155803d6f9123da0d82bbf2a30db10 Mon Sep 17 00:00:00 2001
From: misraved
Date: Wed, 18 Jun 2025 20:17:11 +0530
Subject: [PATCH 1/4] Add sample prompt for creating tailpipe table

---
 docs/develop/using-ai.md | 155 +++++++++++++++++++++++++++++++++++++++
 docs/sidebar.json        |   1 +
 2 files changed, 156 insertions(+)
 create mode 100644 docs/develop/using-ai.md

diff --git a/docs/develop/using-ai.md b/docs/develop/using-ai.md
new file mode 100644
index 0000000..11c134f
--- /dev/null
+++ b/docs/develop/using-ai.md
@@ -0,0 +1,155 @@
---
title: Using AI
sidebar_label: Using AI
---

# Using AI

Creating new tables for Tailpipe plugins with AI tools and IDEs works remarkably well. At Turbot, we develop plugin tables frequently and use AI for almost every new table we create. We've experimented with various approaches, including detailed prompt engineering, explicit guidelines, IDE rules and instructions, and complex workflows, but found that AI typically produces excellent results even without heavy guidance.

The key to this success is working within existing plugin repositories and opening the entire repository as a folder or project in your IDE. This gives AI tools access to existing table implementations, documentation examples, code patterns, and naming conventions to generate consistent, high-quality results without extensive prompting.

If you're looking to use AI to query Tailpipe rather than develop new tables, you can use the [Tailpipe MCP server](https://github.com/turbot/tailpipe-mcp), which provides powerful tools for AI agents to inspect tables and run queries.

## Getting Started

While AI often works well with simple requests like "Create a table for [log_type]", here are some prompts we use at Turbot that you may find helpful as starting points.

### Prerequisites

1. Open the plugin repository in your IDE (Cursor, VS Code, Windsurf, etc.) to give AI tools access to all existing code and documentation.
2. Ensure you have Tailpipe installed (`brew install turbot/tap/tailpipe` for macOS, or the installation script for Linux/WSL).
3. Set up access credentials for the cloud provider (e.g., AWS credentials); see the example connection after this list.
4. Configure test log sources (e.g., S3 buckets with sample logs, CloudWatch Log Groups).
5. Configure the [Tailpipe MCP server](https://github.com/turbot/tailpipe-mcp), which allows the agent to inspect tables and run queries.
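The source configuration examples later on this page reference `connection.aws.test_account`. As a sketch of what that prerequisite might look like, a minimal connection definition in `~/.tailpipe/config/aws.tpc` could be the following (the connection name and profile value are illustrative, not required values):

```hcl
# Illustrative test connection; swap the profile for one that exists
# in your AWS config, or use your plugin's other credential arguments.
connection "aws" "test_account" {
  profile = "test-account"
}
```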
### Create Table

First, create the new table and its documentation, using existing tables and docs as reference.

```md
Your goal is to create a new Tailpipe table and documentation for [log_type].

1. Review existing tables and their documentation in the plugin to understand:
   - Table structure patterns
   - Source configurations (S3, CloudWatch, etc.)
   - Standard field enrichment
   - Naming conventions

2. Implement the table with:
   - Proper source metadata configuration
   - Row enrichment logic
   - Standard fields (tp_id, tp_timestamp, etc.)
   - Log-specific fields
   - Extractor implementation for parsing logs

3. Create documentation including:
   - Table overview and description
   - Configuration examples for each source type
   - Example queries for common use cases
   - Schema reference
```
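For the documentation step, example queries are plain SQL against the new table. A minimal sketch of the expected style, assuming a hypothetical `aws_my_log` table with an `event_name` column (`tp_timestamp` is one of the standard enrichment fields mentioned above):

```sql
-- Recent events, newest first (table and column names are illustrative)
select
  tp_timestamp,
  event_name
from
  aws_my_log
where
  tp_timestamp >= now() - interval '1 day'
order by
  tp_timestamp desc;
```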
### Build Plugin

Next, build the plugin with your changes and verify your new table is properly registered.

```md
Your goal is to build the plugin and verify that your new table is properly registered and functional.

1. Build the plugin:
   ```sh
   make
   ```

2. Verify the table is registered:
   ```sh
   tailpipe plugin list
   ```

3. Check the table schema:
   ```sh
   tailpipe query
   > .inspect aws_[log_type]
   ```
```

### Configure Log Sources

```md
Your goal is to configure log sources for [log_type] to validate your Tailpipe table implementation.

1. Create configuration in ~/.tailpipe/config/aws.tpc with the appropriate source type:

   For S3:
   ```hcl
   partition "aws_[log_type]" "s3_logs" {
     source "aws_s3_bucket" {
       connection = connection.aws.test_account
       bucket = "test-logs-bucket"
     }
   }
   ```

   For CloudWatch:
   ```hcl
   partition "aws_[log_type]" "cloudwatch_logs" {
     source "aws_cloudwatch_log_group" {
       connection = connection.aws.test_account
       log_group_name = "/aws/my-log-group"
     }
   }
   ```

2. Ensure sample logs are available in your configured source.
```
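If your test bucket is empty, you can seed it before collecting. One way to do this with the AWS CLI, assuming the illustrative bucket name above and a hypothetical local `test-logs/` directory of sample files:

```sh
# Upload local sample log files to the test bucket (paths are illustrative)
aws s3 cp ./test-logs/ s3://test-logs-bucket/ --recursive
```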
### Validate Data Collection

Next, collect and query the logs to test that the table implementation works correctly.

```md
Your goal is to thoroughly test your table implementation by validating data collection and querying.

1. Collect logs from the configured source:
   ```sh
   tailpipe collect aws_[log_type]
   ```

2. Validate data collection:
   - Check collection status and statistics
   - Verify log parsing and enrichment
   - Confirm partition organization

3. Test queries:
   ```sh
   tailpipe query
   ```
   - Execute each example query from the documentation
   - Verify field types and values
   - Test filtering and aggregation
   - Validate enriched fields

4. Document test results:
   - Collection statistics
   - Query results
   - Any parsing or enrichment issues
```
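As a quick sanity check from the shell, the collection and a couple of spot-check queries can be run directly. This sketch assumes your table follows the `aws_[log_type]` naming pattern and relies on the standard `tp_timestamp` field being populated by enrichment:

```sh
# Collect, then spot-check row counts and enrichment
tailpipe collect aws_[log_type]
tailpipe query "select count(*) from aws_[log_type]"
tailpipe query "select count(*) from aws_[log_type] where tp_timestamp is null"
```

A non-zero result from the last query usually points to a gap in the enrichment logic.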
### Cleanup Test Resources

After testing is complete, clean up test resources and data.

```md
Your goal is to clean up all test resources and data created for validation.

1. Remove test data:
   - Delete test log files from S3
   - Clean up test log streams from CloudWatch
   - Remove any other test artifacts

2. Verify cleanup:
   - Check source locations are clean
   - Confirm all test resources are removed
   - Document any persistent resources that should be retained
```

diff --git a/docs/sidebar.json b/docs/sidebar.json
index 7f378b3..2357e5a 100644
--- a/docs/sidebar.json
+++ b/docs/sidebar.json
@@ -69,6 +69,7 @@
         "develop/writing-plugins/implementing-tables"
       ]
     },
+    "develop/using-ai",
     "develop/plugin-release-checklist",
     "develop/table-docs-standards",
     "develop/writing-example-queries",

From b9aceba9570d4c028d101fb31dab0bf7679d371b Mon Sep 17 00:00:00 2001
From: misraved
Date: Wed, 18 Jun 2025 21:24:58 +0530
Subject: [PATCH 2/4] Fix formatting issues

---
 docs/develop/using-ai.md | 164 +++++++++++++++++----------------------
 1 file changed, 73 insertions(+), 91 deletions(-)

diff --git a/docs/develop/using-ai.md b/docs/develop/using-ai.md
index 11c134f..f64b5c3 100644
--- a/docs/develop/using-ai.md
+++ b/docs/develop/using-ai.md
@@ -27,27 +27,26 @@ While AI often works well with simple requests like "Create a table for [log_typ

First, create the new table and its documentation, using existing tables and docs as reference.

-```md
+```
Your goal is to create a new Tailpipe table and documentation for [log_type].

1. Review existing tables and their documentation in the plugin to understand:
-   - Table structure patterns
-   - Source configurations (S3, CloudWatch, etc.)
-   - Standard field enrichment
-   - Naming conventions
-
-2. Implement the table with:
-   - Proper source metadata configuration
-   - Row enrichment logic
-   - Standard fields (tp_id, tp_timestamp, etc.)
-   - Log-specific fields
-   - Extractor implementation for parsing logs
-
-3. Create documentation including:
-   - Table overview and description
-   - Configuration examples for each source type
-   - Example queries for common use cases
-   - Schema reference
+   - Table structure patterns and naming conventions
+   - Source configurations (S3, CloudWatch, etc.)
+   - Standard field enrichment patterns
+   - Column structures and data types
+
+2. Create the table implementation with:
+   - Proper source metadata configuration
+   - Row enrichment logic for standard and log-specific fields
+   - Extractor implementation for parsing logs
+   - Registration in the plugin
+
+3. Create documentation at `docs/tables/aws_[log_type].md` including:
+   - Table overview and description
+   - Configuration examples for each source type
+   - Example queries with expected results
+   - Complete schema reference
```

### Build Plugin

Next, build the plugin with your changes and verify your new table is properly registered.

```md
Your goal is to build the plugin and verify that your new table is properly registered and functional.

-1. Build the plugin:
-   ```sh
-   make
-   ```
-
-2. Verify the table is registered:
-   ```sh
-   tailpipe plugin list
-   ```
-
-3. Check the table schema:
-   ```sh
-   tailpipe query
-   > .inspect aws_[log_type]
-   ```
+1. Build the plugin using the `make` command.
+
+2. Verify the table is registered using `tailpipe plugin list`.
+
+3. Check the table schema and structure using the Tailpipe MCP server.
+
+4. Test basic querying functionality with `tailpipe query "select * from aws_[log_type] limit 1"`.
```

-### Configure Log Sources
+### Configure Test Sources
+
+To test the table's functionality, you'll need log sources to query. Configure appropriate sources based on your table's requirements.

```md
-Your goal is to configure log sources for [log_type] to validate your Tailpipe table implementation.
+Your goal is to configure log sources for [log_type] to validate your table implementation.

-1. Create configuration in ~/.tailpipe/config/aws.tpc with the appropriate source type:
+1. Configure the appropriate source in ~/.tailpipe/config/aws.tpc:

-   For S3:
-   ```hcl
+   For S3 logs:
+   ```
   partition "aws_[log_type]" "s3_logs" {
     source "aws_s3_bucket" {
       connection = connection.aws.test_account
       bucket = "test-logs-bucket"
     }
   }
   ```

-   For CloudWatch:
-   ```hcl
+   For CloudWatch logs:
+   ```
   partition "aws_[log_type]" "cloudwatch_logs" {
     source "aws_cloudwatch_log_group" {
       connection = connection.aws.test_account
       log_group_name = "/aws/my-log-group"
     }
   }
   ```

-2. Ensure sample logs are available in your configured source.
+2. Ensure test logs are available in your configured source with sufficient data variety to test all table columns and features.
```

### Validate Data Collection

Next, collect and query the logs to test that the table implementation works correctly.

```md
Your goal is to thoroughly test your table implementation by validating data collection and querying.

-1. Collect logs from the configured source:
-   ```sh
-   tailpipe collect aws_[log_type]
-   ```
-
-2. Validate data collection:
-   - Check collection status and statistics
-   - Verify log parsing and enrichment
-   - Confirm partition organization
-
-3. Test queries:
-   ```sh
-   tailpipe query
-   ```
-   - Execute each example query from the documentation
-   - Verify field types and values
-   - Test filtering and aggregation
-   - Validate enriched fields
-
-4. Document test results:
-   - Collection statistics
-   - Query results
-   - Any parsing or enrichment issues
+1. Collect logs from your configured source:
+   ```
+   tailpipe collect aws_[log_type]
+   ```
+
+2. Validate the implementation:
+   - Execute `select * from aws_[log_type]` to verify all columns have correct data
+   - Test each example query from the documentation
+   - Verify field types and enrichment logic
+   - Test filtering and aggregation capabilities
+
+3. Document your test results including:
+   - Collection statistics
+   - Query results
+   - Any parsing or enrichment issues found
```

### Cleanup Test Resources

After testing is complete, clean up test resources and data.

```md
-Your goal is to clean up all test resources and data created for validation.
+Your goal is to clean up all test resources created for validation.

-1. Remove test data:
-   - Delete test log files from S3
-   - Clean up test log streams from CloudWatch
-   - Remove any other test artifacts
-
-2. Verify cleanup:
-   - Check source locations are clean
-   - Confirm all test resources are removed
-   - Document any persistent resources that should be retained
+1. Remove all test data from your configured sources:
+   - Delete test log files from S3 buckets
+   - Clean up test log streams from CloudWatch
+   - Remove any other test artifacts created
+
+2. Verify that all test resources have been successfully removed using the same tools used to create them.
```

From abf0656319f00784c9601a958b6a3a23d35f2e2a Mon Sep 17 00:00:00 2001
From: misraved
Date: Wed, 18 Jun 2025 21:44:30 +0530
Subject: [PATCH 3/4] Fix formatting

---
 docs/develop/using-ai.md | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/docs/develop/using-ai.md b/docs/develop/using-ai.md
index f64b5c3..513757e 100644
--- a/docs/develop/using-ai.md
+++ b/docs/develop/using-ai.md
@@ -27,7 +27,7 @@ While AI often works well with simple requests like "Create a table for [log_typ

First, create the new table and its documentation, using existing tables and docs as reference.

-```
+```md
Your goal is to create a new Tailpipe table and documentation for [log_type].

@@ -104,10 +104,7 @@ Your goal is to thoroughly test your table implementation by validati

-1. Collect logs from your configured source:
-   ```
-   tailpipe collect aws_[log_type]
-   ```
+1. Collect logs from your configured source using the `tailpipe collect aws_[log_type]` command.

2. Validate the implementation:

From 7e029bcc49bb209159a8446529bdfe1b76bd5880 Mon Sep 17 00:00:00 2001
From: misraved
Date: Wed, 18 Jun 2025 21:54:37 +0530
Subject: [PATCH 4/4] Fix formatting

---
 docs/develop/using-ai.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/docs/develop/using-ai.md b/docs/develop/using-ai.md
index 513757e..025afaa 100644
--- a/docs/develop/using-ai.md
+++ b/docs/develop/using-ai.md
@@ -75,24 +75,20 @@ Your goal is to configure log sources for [log_type] to validate your table impl

1. Configure the appropriate source in ~/.tailpipe/config/aws.tpc:

   For S3 logs:
-   ```
   partition "aws_[log_type]" "s3_logs" {
     source "aws_s3_bucket" {
       connection = connection.aws.test_account
       bucket = "test-logs-bucket"
     }
   }
-   ```

   For CloudWatch logs:
-   ```
   partition "aws_[log_type]" "cloudwatch_logs" {
     source "aws_cloudwatch_log_group" {
       connection = connection.aws.test_account
       log_group_name = "/aws/my-log-group"
     }
   }
-   ```

2. Ensure test logs are available in your configured source with sufficient data variety to test all table columns and features.
```
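To script the cleanup described in the Cleanup Test Resources prompts above, standard AWS CLI commands are enough; the bucket and log group names here are the illustrative test values used in the source configuration examples:

```sh
# Remove the illustrative test data created for validation
aws s3 rm s3://test-logs-bucket --recursive
aws logs delete-log-group --log-group-name /aws/my-log-group
```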