
fix: data types of archived notifications table to match notifications table #363

Merged
merged 3 commits into main from fix/archive-notification-table
Nov 22, 2024

Conversation

@kshitij-k-osmosys (Contributor) commented Nov 21, 2024

API PR Checklist

Pre-requisites

  • I have gone through the Contributing guidelines for Submitting a Pull Request (PR) and ensured that this is not a duplicate PR.
  • I have performed unit testing for the features added or updated to ensure they work as expected.
  • I have added/updated test cases to the test suite as applicable.
  • I have performed preliminary testing with the test suite to ensure that existing features are not impacted and that new features work as expected as a whole.
  • I have added/updated the required API docs as applicable.
  • I have added/updated the .env.example file with the required values as applicable.

PR Details

PR details have been updated as per the given format (see below)

  • PR title adheres to the format specified in guidelines (e.g., feat: add admin login endpoint)
  • Description has been added
  • Related changes have been added (optional)
  • Screenshots have been added (optional)
  • Query request and response examples have been added (as applicable, in case added or updated)
  • Documentation changes have been listed (as applicable)
  • Test suite/unit testing output is added (as applicable)
  • Pending actions have been added (optional)
  • Any other additional notes have been added (optional)

Additional Information

  • Appropriate label(s) have been added (ready for review should be added if the PR is ready to be reviewed)
  • Assignee(s) and reviewer(s) have been added (optional)

Note: The reviewer should ensure that the checklist and description have been populated and followed correctly, and that the PR is merged only after all conversations are resolved and CI checks pass.


Description:

Adds a migration with up and down methods to fix the data types of the archived notifications table to match the notifications table.

Related changes:
NA

Screenshots:

  1. Before migration: osmox-local.notify_archived_notifications
    (screenshot)

  2. For comparison: osmox-local.notify_notifications
    (screenshot)

  3. After migration: osmox-local.notify_archived_notifications
    (screenshot)

  4. After reverting this migration: osmox-local.notify_archived_notifications
    (screenshot)

Query request and response:
NA

Documentation changes:
NA

Test suite/unit testing output:
NA

Pending actions:
NA

Additional notes:
NA

Summary by CodeRabbit

  • Chores
    • Updated data types for specific columns in the notify_archived_notifications table to enhance data handling.
    • Implemented a rollback strategy for the recent changes to ensure data integrity.


coderabbitai bot commented Nov 21, 2024

Walkthrough

This pull request introduces a new migration class, UpdateArchiveTableDatatypes1732200721350, which modifies the data types of specific columns in the notify_archived_notifications table. The up method changes the data and result columns to LONGTEXT and updates the retry_count column to INT(3) with a default of 0. The down method provides a rollback mechanism to revert these changes.
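Based on the walkthrough above, the migration's ALTER statements would look roughly like the following sketch (inferred from the review comments, not the exact file contents; the original TEXT and INT(11) definitions are taken from the earlier migrations referenced below):

```sql
-- up: align notify_archived_notifications with notify_notifications
ALTER TABLE notify_archived_notifications MODIFY COLUMN data LONGTEXT;
ALTER TABLE notify_archived_notifications MODIFY COLUMN result LONGTEXT;
ALTER TABLE notify_archived_notifications MODIFY COLUMN retry_count INT(3) NOT NULL DEFAULT 0;

-- down: revert to the previous column definitions
ALTER TABLE notify_archived_notifications MODIFY COLUMN data TEXT;
ALTER TABLE notify_archived_notifications MODIFY COLUMN result TEXT;
ALTER TABLE notify_archived_notifications MODIFY COLUMN retry_count INT(11) NOT NULL DEFAULT 0;
```

In the actual TypeORM migration, each statement would be executed via queryRunner.query() inside the up and down methods.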

Changes

File: apps/api/src/database/migrations/1732200721350-UpdateArchiveTableDatatypes.ts
Change Summary: Added migration class UpdateArchiveTableDatatypes1732200721350 implementing MigrationInterface, with methods to alter column types in the notify_archived_notifications table.

Suggested reviewers

  • xixas

🐰 In the database, changes we make,
To the notify table, new paths we take.
LONGTEXT for data, a count that is small,
With a rollback in place, we can handle it all!
Hopping through migrations, with joy we sing,
For every update, new data we bring! 🌟



coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between ff91970 and 91c02df.

📒 Files selected for processing (1)
  • apps/api/src/database/migrations/1732200721350-HotfixArchiveTable.ts (1 hunks)
🔇 Additional comments (3)
apps/api/src/database/migrations/1732200721350-HotfixArchiveTable.ts (3)

1-3: LGTM: Class structure follows TypeORM conventions

The migration class is properly structured with correct imports and implements the MigrationInterface as required.


4-19: Verify column types match the notifications table

Let's verify that these changes truly match the notify_notifications table structure.

Additionally, consider these potential issues:

  1. INT(3) for retry_count limits values to 999. Verify if this is sufficient for your use case.
  2. Ensure no data truncation will occur during the TEXT to LONGTEXT conversion.
✅ Verification successful

Let me gather more information about the table structure by examining the migration files.


Based on the migration files, I can now provide a definitive response:

Column changes match the main notifications table structure

The changes in HotfixArchiveTable migration perfectly mirror the existing structure of notify_notifications table:

  1. LONGTEXT for 'data' and 'result' columns - matches migration 1718945071282
  2. INT(3) for retry_count with NOT NULL and DEFAULT 0 - matches migration 1719299457601

The INT(3) size limitation is intentional and consistent with the main table's design; it is not a concern, as it was already validated in the original implementation.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Compare column types between tables
# Expected: Column types should match between tables

# Check column definitions in both tables
mysql_query="SELECT COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE, COLUMN_DEFAULT 
             FROM INFORMATION_SCHEMA.COLUMNS 
             WHERE TABLE_NAME IN ('notify_notifications', 'notify_archived_notifications')
             AND COLUMN_NAME IN ('data', 'result', 'retry_count')
             ORDER BY TABLE_NAME, COLUMN_NAME;"

echo "Checking column definitions..."
echo "$mysql_query"

Length of output: 1091


Script:

#!/bin/bash
# Search for migration files related to notify_notifications table
fd ".*\.ts$" apps/api/src/database/migrations | xargs rg -l "notify_notifications" 

# Get content of these files to understand the table structure
echo "=== Content of migration files ==="
fd ".*\.ts$" apps/api/src/database/migrations | xargs rg "notify_notifications" -A 5 -B 5

Length of output: 25510


1-37: Additional testing and deployment considerations

Before deploying this migration:

  1. Test the migration with a production-like dataset to ensure no data loss
  2. Have a backup before running the migration
  3. Consider running this during low-traffic periods
  4. Verify application code handles both old and new column types during deployment
✅ Verification successful

Let me verify if there are any existing data constraints or related code that might be affected by this migration.


Let me check the entity definition and the previous migration to understand the complete context.


Migration is safe but requires careful deployment planning

The migration is well-designed and safe to proceed with because:

  1. Entity definition uses the json type for both data and result columns, which aligns with the LONGTEXT migration (in MariaDB the JSON type is an alias for LONGTEXT; MySQL proper stores JSON in an optimized binary format with a LONGTEXT-equivalent size limit)
  2. Previous migration created these columns as TEXT initially
  3. LONGTEXT upgrade is non-destructive as it only increases the maximum storage capacity
  4. INT(3) for retry_count is safe as it's just a display width change, not a storage limit

Deployment recommendations:

  • Schedule during low-traffic window
  • Take database backup before proceeding
  • No application code changes needed as TypeORM entity already handles the types correctly
  • Consider running ANALYZE TABLE after migration if large data exists
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for potential data issues
# Expected: No rows exceed TEXT limit

echo "Checking for large data values that might be affected..."
mysql_query="SELECT COUNT(*) as large_rows
             FROM notify_archived_notifications 
             WHERE LENGTH(data) > 65535 
             OR LENGTH(result) > 65535;"

echo "$mysql_query"

Length of output: 627


Script:

#!/bin/bash
# Check for any related entity definitions or database schemas
ast-grep --pattern 'class NotifyArchivedNotification {
  $$$
}'

# Look for any code that might be affected by the column type changes
rg -l "notify_archived_notifications" --type ts

# Check for any other migrations related to this table
fd ".*-.*Table\.ts$" -t f

Length of output: 574


Script:

#!/bin/bash
# Check the entity definition
cat apps/api/src/modules/archived-notifications/entities/archived-notification.entity.ts

# Check the previous migration
cat apps/api/src/database/migrations/1730724383210-ArchiveCompletedNotifications.ts

Length of output: 9690

coderabbitai bot left a comment
Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
apps/api/src/database/migrations/1732200721350-UpdateArchiveTableDatatypes.ts (2)

15-18: Reconsider INT(3) display width change

The change from INT(11) to INT(3) only affects the display width, not the storage capacity. Both can store the same range of values. Unless there's a specific requirement to match the display format, this change might be unnecessary.

Additionally, note that the display width feature is deprecated in newer MySQL versions.

Consider removing the display width specification entirely:

-      ALTER TABLE notify_archived_notifications MODIFY COLUMN retry_count INT(3) NOT NULL DEFAULT 0;
+      ALTER TABLE notify_archived_notifications MODIFY COLUMN retry_count INT NOT NULL DEFAULT 0;
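As a quick illustration of the display-width point above (using a hypothetical throwaway table, not part of the codebase):

```sql
-- INT(3) and INT(11) store the same range; the (N) only affects
-- display padding, and only when combined with ZEROFILL.
CREATE TABLE width_demo (a INT(3), b INT(11));
INSERT INTO width_demo VALUES (2147483647, 2147483647);  -- both columns accept the full INT range
```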

4-19: Consider wrapping migrations in transactions

While TypeORM typically handles transactions, for data type changes it's safer to explicitly ensure transaction safety. This prevents partial migrations if one of the ALTER statements fails. Note, however, that MySQL DDL statements such as ALTER TABLE cause an implicit commit, so a transaction cannot roll back an ALTER that has already completed; the wrapper mainly stops subsequent statements from running after a failure.

Consider wrapping the migrations in transactions:

  public async up(queryRunner: QueryRunner): Promise<void> {
+   await queryRunner.startTransaction();
+   try {
      // existing ALTER statements...
+     await queryRunner.commitTransaction();
+   } catch (err) {
+     await queryRunner.rollbackTransaction();
+     throw err;
+   }
  }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 375a9a5 and 9647de9.

📒 Files selected for processing (1)
  • apps/api/src/database/migrations/1732200721350-UpdateArchiveTableDatatypes.ts (1 hunks)
🔇 Additional comments (3)
apps/api/src/database/migrations/1732200721350-UpdateArchiveTableDatatypes.ts (3)

1-3: LGTM! Class declaration follows TypeORM conventions

The implementation correctly imports required TypeORM interfaces and follows the naming convention for migration classes.


21-36: LGTM! Down migration properly reverts changes

The down migration:

  • Correctly reverses changes in the opposite order
  • Maintains consistent default values
  • Properly specifies the original data types

6-8: Verify the necessity of LONGTEXT data type

The migration changes data and result columns from TEXT to LONGTEXT. While this matches the notifications table, it's important to verify:

  1. Are there existing records that require >65KB storage (TEXT limit)?
  2. Are there known cases of data truncation?

LONGTEXT has higher storage overhead, so this change should be justified by actual data requirements.

Also applies to: 11-13

✅ Verification successful

LONGTEXT upgrade is justified based on codebase evidence

The migration to LONGTEXT is warranted based on the following findings:

  1. A previous migration (1718945071282-UpdateDataColumnType.ts) already upgraded the main notifications table (notify_notifications) to use LONGTEXT for the same columns.
  2. The archived notifications table should maintain data type parity with the source table since it stores the same data after completion.
  3. The retry notifications table (1723401906112-AddRetryTable.ts) also uses LONGTEXT, indicating a consistent pattern for storing notification data.

This change ensures consistency across related tables and prevents potential data truncation when archiving notifications, as the system is designed to handle large notification payloads across all its storage locations.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for large content in existing records
echo "Checking data column length distribution..."
mysql -e "SELECT 
  CASE 
    WHEN LENGTH(data) > 65535 THEN 'Exceeds TEXT limit'
    WHEN LENGTH(data) > 16383 THEN 'Large content'
    ELSE 'Normal size'
  END as size_category,
  COUNT(*) as count
FROM notify_archived_notifications
GROUP BY size_category;"

echo "Checking result column length distribution..."
mysql -e "SELECT 
  CASE 
    WHEN LENGTH(result) > 65535 THEN 'Exceeds TEXT limit'
    WHEN LENGTH(result) > 16383 THEN 'Large content'
    ELSE 'Normal size'
  END as size_category,
  COUNT(*) as count
FROM notify_archived_notifications
GROUP BY size_category;"

Length of output: 954


Script:

#!/bin/bash
# Let's check the schema and related code to understand the data requirements

# Check for any large payload examples or test data
rg -l "data.*notification" -g "*.json" -g "*.test.*"

# Look for any comments or code indicating data size requirements
rg -i "text|longtext|data.*size|payload" -g "*.ts" -g "*.js"

# Check if there are any related issues or discussions in migration files
fd "migration" --exec cat {} \; | grep -i "text\|longtext\|data.*size"

# Look for notification-related models/schemas
fd "notification" --type f --exec cat {} \; | grep -i "text\|longtext\|column.*data"

Length of output: 15762

@xixas xixas merged commit 78da9f7 into main Nov 22, 2024
6 checks passed
@xixas xixas deleted the fix/archive-notification-table branch November 22, 2024 06:06
Labels: bug (Something isn't working), ready for review
2 participants