feat(asset): implement file system repository and zip decompressor #66
base: main
Conversation
Walkthrough

This pull request introduces a comprehensive asset management system with multiple components across various packages. The changes include new interfaces, repositories, entities, and use cases for handling assets, groups, and related operations. Key additions involve GraphQL schema definitions, decompression and compression utilities, event publishing mechanisms, and robust validation frameworks. The implementation supports creating, updating, deleting, uploading, and managing assets with sophisticated error handling and transaction management.
Sequence Diagram

sequenceDiagram
participant Client
participant GraphQL
participant AssetInteractor
participant PersistenceRepo
participant PubSubRepo
Client->>GraphQL: Upload Asset
GraphQL->>AssetInteractor: CreateAsset
AssetInteractor->>PersistenceRepo: Save Asset
PersistenceRepo-->>AssetInteractor: Asset Saved
AssetInteractor->>PubSubRepo: Publish Asset Created Event
PubSubRepo-->>AssetInteractor: Event Published
AssetInteractor-->>GraphQL: Return Asset
GraphQL-->>Client: Asset Upload Confirmed
📜 Recent review details
Configuration used: CodeRabbit UI
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
Actionable comments posted: 3
🧹 Nitpick comments (15)
asset/utils.go (2)
Lines 3-5: Consider adding a module-level comment for clarity. Although the import statement is straightforward, adding a small comment at the top of this file (or at least above the import statement) about the overall purpose of this utility file would help guide future contributors.

Lines 7-9: Add a docstring for the function. Even though this function is short and unexported, adding a concise docstring (e.g., `// generateID returns a new UUID string.`) improves readability and clarifies its role.

asset/gcs/repository.go (2)
Lines 36-52: Create Method. This method successfully prevents overwriting existing objects and creates a zero-length object. If this design is intentional (e.g., creating a placeholder or metadata-only object), then it's fine. Otherwise, consider writing an actual payload here or renaming the method to reflect that it only creates metadata.

Lines 107-131: List Method. Good approach iterating through the bucket. Consider adding a check for skipping directories if the system uses virtual folder structures in GCS.

asset/decompress/zip.go (1)
Lines 62-66: processZipFile Method. Stub for iterating through ZIP contents and creating new assets; implementation needed. Would you like help adding the file extraction logic to finalize this feature?
asset/service.go (5)
Lines 9-11: Use more descriptive struct documentation. Consider adding doc comments for the Service struct to clarify its responsibilities and intended usage. This helps maintainers quickly understand the Service's role in the asset package.

Lines 17-33: Validate asset input for improved robustness. Inside the `Create` method, consider adding validation checks for required fields (e.g., `input.Name`, `input.ContentType`). If these fields are empty or invalid, return an error early rather than storing potentially useless data.

Lines 28-30: Add context to repository error messages. When returning the repository error, wrap or annotate it with context, e.g., `fmt.Errorf("failed to create asset: %w", err)`. This ensures clarity when troubleshooting.

Lines 35-59: Guard against status overwrite. The `Update` method sets `asset.Status = input.Status` unconditionally, potentially overriding a prior status. Consider validating that the current status is compatible with the new status, or at least logging when a status transitions back to a "lesser" state.

Lines 77-79: Clarify the behavior of `GetUploadURL`. Although returning a direct URL is not supported by the filesystem implementation, consider adding more documentation or a usage hint on how callers should handle this situation (e.g., graceful fallback or an alternative upload mechanism).
asset/fs_repository.go (4)
Lines 25-43: Populate asset metadata more explicitly. While returning a minimal `Asset` struct with file info is fine, consider reading any previously stored metadata if you plan to support additional attributes (like custom metadata), ensuring consistency across the system.

Lines 48-57: Check and log deferred file closes. You correctly defer `file.Close()`. If a file fails to close for some reason (e.g., an I/O error), that information is lost. Consider logging or returning it where relevant.

Lines 96-100: Improve clarity of unsupported behavior. It may help to mention in the error message that direct upload URLs are only supported in certain repository implementations. This clarifies the reason for the error to consumers.

Lines 141-166: Consider caching asset metadata. Reading directory contents for each call might be slow for large directories. If performance becomes an issue, you could introduce caching or indexing capabilities while invalidating stale entries upon updates/deletes.
go.mod (1)
Lines 51-67: Check for unused indirect dependencies. A variety of new indirect dependencies have been added. If any are not actually needed, consider removing them to keep the dependency tree small and reduce the potential attack surface.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
go.sum is excluded by `!**/*.sum`
📒 Files selected for processing (10)
.gitignore (1 hunks)
asset/decompress/zip.go (1 hunks)
asset/fs_repository.go (1 hunks)
asset/gcs/repository.go (1 hunks)
asset/pubsub/pubsub.go (1 hunks)
asset/repository.go (1 hunks)
asset/service.go (1 hunks)
asset/types.go (1 hunks)
asset/utils.go (1 hunks)
go.mod (4 hunks)
✅ Files skipped from review due to trivial changes (1)
- .gitignore
🧰 Additional context used
🪛 golangci-lint (1.62.2)
asset/decompress/zip.go
47-47: Error return value of `d.assetService.Update` is not checked (errcheck)
53-53: Error return value of `d.assetService.Update` is not checked (errcheck)

🪛 GitHub Check: CI
asset/decompress/zip.go
[failure] 47-47: Error return value of `d.assetService.Update` is not checked (errcheck)
[failure] 53-53: Error return value of `d.assetService.Update` is not checked (errcheck)
🔇 Additional comments (37)
asset/gcs/repository.go (13)
Lines 1-13: Imports & Package Declaration. No issues found with the initial package and imports.

Lines 15-19: Struct Definition. Struct fields appear standard and concise. Consider documenting the `basePath` usage for clarity, if desired.

Lines 21-22: Interface Assertion. This interface assertion correctly enforces compile-time checks for the `Repository` interface.

Lines 23-34: NewRepository Constructor. Overall logic looks good. It properly handles client creation failures.

Lines 54-56: getObject Helper. Helper method is straightforward; no issues found.

Lines 58-63: handleNotFound Helper. Good approach for translating a GCS not-found error to a friendlier error.

Lines 65-79: Read Method. Reads object attributes and gracefully handles any not-found scenario. Looks solid.

Lines 81-94: Update Method. Straightforward update logic. Properly returns an error on failure.

Lines 96-105: Delete Method. Appropriate handling of non-existent assets by returning nil is good.

Lines 133-146: Upload Method. Logic is clear; ensure the caller sets `Metadata` if needed before or after this write if there are more asset details to store.

Lines 148-158: FetchFile Method. Properly returns an io.ReadCloser and handles not-found errors. Looks good.

Lines 160-171: GetUploadURL Method. Generates a signed URL for PUT. That approach is standard for GCS.

Lines 173-175: objectPath Helper. Uses `path.Join` to build the object path. Straightforward and clear.
asset/repository.go (1)
Lines 8-29: Interfaces Definition. The interfaces `Reader`, `Writer`, `FileOperator`, and `Repository` are well-structured and SRP-friendly. They comprehensively cover asset read, write, and file operations, and unify them in `Repository`.
asset/pubsub/pubsub.go (7)
Lines 1-2: Package Declaration. A new `pubsub` package for event handling is clear.

Lines 3-6: Imports. Dependencies make sense for this event-based approach.

Lines 8-11: AssetEvent Struct. Simple event payload with a type and asset ID. Straightforward design.

Lines 13-15: Publisher Interface. Abstracting out a publisher interface is a good design for pluggable backends.

Lines 17-20: AssetPubSub Struct. Holds a publisher and topic. Clean separation of concerns.

Lines 22-27: NewAssetPubSub Constructor. Straightforward constructor. No issues.

Lines 29-35: PublishAssetEvent Method. Publishes the event with the embedded publisher. Good approach.
asset/types.go (5)
Lines 1-11: Package and ID Definition. Defines the `ID` type and its `String()` method. No issues found.

Lines 13-20: Status Type and Constants. Represents asset statuses clearly. Sufficient coverage of typical states.

Lines 22-33: Asset Struct. Holds essential asset details. Good choice of fields.

Lines 35-39: CreateAssetInput. Minimal set of fields for asset creation. Looks good.

Lines 41-47: UpdateAssetInput. Supports partial updates for name, URL, and content type, plus required status and error fields. This is helpful for flexible updates.
asset/decompress/zip.go (5)
Lines 1-2: Package Declaration. Defines the `decompress` package. No issues found.

Lines 3-10: Imports. Uses `archive/zip` and references the `asset` package. Logical pairing for decompression tasks.

Lines 11-13: ZipDecompressor Struct. Holds a reference to `asset.Service`, enabling direct asset updates. Design is coherent.

Lines 15-19: NewZipDecompressor Constructor. Straightforward constructor returning the `ZipDecompressor`.

Lines 21-42: DecompressAsync: Preliminary Setup.
• Correctly fetches the ZIP file from the `assetService`.
• Updates status to `EXTRACTING`.
• Initiates asynchronous processing.
Looks good as a starting point.
asset/service.go (1)
Lines 69-71: Close the file stream properly in the caller. Although returning an `io.ReadCloser` is correct, ensure that any logic calling `GetFile` properly defers closing the stream. This is a usage detail, but documenting it would help avoid potential resource leaks.
asset/fs_repository.go (1)
Lines 15-23: Check for symlink attacks. `os.MkdirAll(baseDir, 0755)` might be susceptible if `baseDir` is user-controlled. While this might be acceptable for trusted environments, consider verifying or sanitizing user input to prevent path traversal or malicious symlinks.
go.mod (4)
Line 32 (also 38-39, 42-44): Validate updated dependencies for compatibility. Multiple libraries have been updated (e.g., `github.com/stretchr/testify`, `go.opentelemetry.io/otel`). Ensure that these versions don't introduce breaking changes or deprecations. Re-run existing tests to confirm compatibility.

Lines 70-71: Use the storage library in a secure, efficient manner. `cloud.google.com/go/storage` is introduced/retained, so ensure that all GCS interactions are properly authenticated and that objects are managed securely, especially if these calls handle sensitive files.

Line 93 (also 119): Confirm logging upgrades. Newer versions of `go-logr` and `objx` might contain changes that affect logging or data handling. Confirm that logging statements and object manipulations continue to work as expected.

Lines 129-132 (also 134-139, 142-147): Assess telemetry and gRPC upgrades. The OpenTelemetry and gRPC libraries have had significant updates. Check your instrumentation and remote procedure calls to ensure no changes are required for new APIs or configuration patterns.
Actionable comments posted: 1
🧹 Nitpick comments (6)
asset/infrastructure/decompress/zip.go (2)

Lines 83-97: Be aware of potential high memory usage when reading entire file content. Using `io.ReadAll(rc)` loads the entire file into memory. For large files, this can lead to high memory usage. Consider a streaming-based approach if large file sizes are expected.
Lines 190-194: Expanded hidden-file detection might be needed. `isHiddenFile` only checks Unix-style hidden files (dot-prefixed). If this needs to support OS-specific hidden-file rules (e.g., Windows), expand the logic accordingly. Otherwise, this is fine as-is.

asset/infrastructure/gcs/client.go (2)
Lines 137-153: Check content type on upload if required for your application. You might want to set metadata or validate the asset's content type here if it is important for your domain logic.

Lines 167-178: Review upload URL security. The upload URL is valid for 15 minutes. Confirm that this expiry meets your security and performance needs, or adjust as required to balance risk and convenience.
asset/repository/persistence_repository.go (1)
Lines 15-19: CRUD methods are consistent and clear. This `Writer` interface aligns well with typical domain-driven design, encapsulating asset creation, updating, and deletion. Be sure to handle partial failures or concurrency conflicts gracefully in your concrete implementations.

asset/domain/asset.go (1)
Lines 22-33: Comprehensive struct for asset details. This struct captures essential asset attributes. Consider validating some fields (e.g., `size >= 0`) to avoid unexpected errors at runtime.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
asset/domain/asset.go (1 hunks)
asset/domain/repository.go (1 hunks)
asset/domain/service.go (1 hunks)
asset/infrastructure/decompress/zip.go (1 hunks)
asset/infrastructure/decompress/zip_test.go (1 hunks)
asset/infrastructure/gcs/client.go (1 hunks)
asset/repository/decompressor_repository.go (1 hunks)
asset/repository/persistence_repository.go (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: CI
asset/infrastructure/decompress/zip.go
[error] 39-50: Data race detected in ZipDecompressor.DecompressWithContent between read and write operations
🔇 Additional comments (20)
asset/infrastructure/decompress/zip.go (1)
Lines 101-173: Confirm concurrency safety in CompressWithContent. Although using a mutex around `zipWriter` is helpful, verify that reading and copying each file in separate goroutines won't introduce hidden data races or excessive memory consumption for large archives. Consider bounding concurrency to a fixed worker pool.

asset/infrastructure/gcs/client.go (2)
Lines 39-50: Validate bucket existence and access permission. When creating a new GCS client, ensure the provided bucket actually exists and the service account has valid permissions. This helps prevent runtime errors during subsequent operations.

Lines 70-84: Return early when an asset is not found. The `Read` method properly uses `handleNotFound` to detect missing assets. This is a good practice that keeps error handling clean.

asset/domain/service.go (1)
Lines 8-16: Ensure completeness of the Service interface. The `Service` interface covers the main asset operations. Periodically confirm that new domain use cases do not require additional methods, keeping the interface aligned with business requirements.

asset/domain/repository.go (1)
Lines 8-25: Interfaces look consistent and cohesive. The `Reader`, `Writer`, and `FileOperator` interfaces accurately partition concerns. Implementers can selectively fulfill or combine them, promoting clearer domain-driven design.

asset/repository/decompressor_repository.go (1)
Lines 8-16: Confirm usage of Decompressor within your domain flow. `DecompressWithContent` and `CompressWithContent` facilitate async file manipulation. Ensure that these methods integrate properly with higher-level domain services (e.g., logging, error handling, concurrency).

asset/repository/persistence_repository.go (3)
Lines 10-13: Interfaces well-defined for reading assets. This `Reader` interface is concise and focuses on essential operations. The separation of concerns between reading and writing assets enhances clarity and maintainability.

Lines 21-25: File operations are well encapsulated. The `FileOperator` interface is flexible, offering direct upload/download and a potential presigned URL flow. Consider auditing or logging usage of `GetUploadURL` if you require more robust security.

Lines 27-31: Cohesive composition of interfaces. By embedding `Reader`, `Writer`, and `FileOperator` into `PersistenceRepository`, you unify all repository needs in a single interface. This is a solid approach for complex asset management systems.

asset/domain/asset.go (6)
Lines 7-11: ID type and String method are straightforward. Defining the `ID` type as a string and providing a `String()` method is clear. This design pattern works well for domain IDs while keeping them flexible.

Lines 15-20: Status constants are well-defined, but consider future extensibility. The enumerations cover pending, active, extracting, and error states. If further states (like Queued or Archived) are ever needed, ensure backward compatibility.

Lines 35-46: Constructor sets correct defaults. Using `NewAsset` to initialize default statuses and timestamps centralizes your object creation. Good practice.

Lines 48-59: Getters follow best practices. Exposing read-only access to private fields via getters is a robust pattern for preventing outside mutation.

Lines 60-65: UpdateStatus with error tracking. This approach to updating status and capturing an error message in a single method helps encapsulate the logic. Ensure the caller handles partial states or transitions consistently.
Lines 67-78: Flexible metadata updates. Selective updates for name, URL, or content type work cleanly. Make sure usage scenarios requiring resetting fields to empty strings are accounted for.

asset/infrastructure/decompress/zip_test.go (5)
Lines 12-49: Decompression test coverage is robust. Testing normal and hidden files ensures that the decompressor properly filters out or processes them. Confirm that the hidden-file logic is documented if this filter is intended for all use cases.

Lines 51-86: Compression and decompression synergy validated. Verifying that what's compressed can be immediately decompressed confirms round-trip integrity. Excellent coverage for typical usage scenarios.

Lines 88-121: Context cancellation handling tested thoroughly. Cancelling both compression and decompression flows ensures that your implementation is resilient to abrupt termination, especially in streaming contexts.

Lines 123-131: Invalid zip content test is crucial. Ensuring graceful failure on corrupt or unexpected data helps strengthen reliability.

Lines 133-144: Helper function simplifies test setup. `createTestZip` centralizes zip creation for multiple scenarios. This fosters reuse and keeps tests clean.
Actionable comments posted: 5
🧹 Nitpick comments (2)
asset/infrastructure/gcs/client.go (2)
Lines 19-33: Consider exporting error constants for testing. The error constants are well-defined, but consider exporting them (capitalizing) to allow for precise error checking in unit tests.

const (
-	errFailedToCreateClient = "failed to create client: %w"
+	ErrFailedToCreateClient = "failed to create client: %w"
-	errAssetAlreadyExists = "asset already exists: %s"
+	ErrAssetAlreadyExists = "asset already exists: %s"
	// ... similar changes for other constants
)
Lines 206-227: Consider using batch operations for DeleteAll. The DeleteAll method could be optimized using batch operations for better performance. Consider implementing batched deletions to reduce the number of API calls.
Actionable comments posted: 0
🧹 Nitpick comments (6)
asset/infrastructure/gcs/client_test.go (6)
Lines 12-14: Increase concurrency safety for the mock client. The `mockClient` currently uses an unguarded map for storing objects. If parallel tests or concurrency were introduced, there could be potential data races or inconsistencies.

Lines 16-21: Consider testing partial content or large object scenarios. While `mockObject` is sufficient for most simple test cases, consider adding tests that handle partial content reads, very large content, or streaming scenarios to ensure broader coverage of potential real-world use cases.

Lines 44-95: Extend test coverage to include bucket name edge cases. Although the table-driven tests are comprehensive for base URLs, it would be beneficial to include scenarios such as an empty bucket name, special characters in the bucket name, or any constraints imposed by GCS naming rules.

Lines 168-184: Test rename collisions or overwrite scenarios. The `TestClient_Move` test successfully verifies the "move" logic, but it does not cover a scenario where the destination object already exists. Consider testing the edge case of overwriting, or failing when the destination path is occupied.

Lines 186-203: Assess the security implications of publicly available object URLs. When generating public object URLs, ensure there is no exposure of sensitive metadata or inadvertently accessible private objects. Include additional coverage or logic for restricted access scenarios if relevant.

Lines 205-264: Incorporate more fine-grained path validation checks. While `TestClient_GetIDFromURL` verifies overall URL correctness, consider stronger checks for sub-paths or reserved keywords. This ensures IDs in different directory structures or subfolders don't break the path resolution logic.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
asset/domain/asset.go (1 hunks)
asset/infrastructure/gcs/client_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- asset/domain/asset.go
🔇 Additional comments (2)
asset/infrastructure/gcs/client_test.go (2)
Lines 97-154: Validate error handling more thoroughly. The CRUD tests use `mock.shouldError = true` to simulate errors, but the actual test only sets the flag without a scenario that triggers or verifies a higher-level error path from the `Client`. Consider adding specific assertions for the code paths that handle underlying errors. Would you like me to provide a test snippet or shell script to locate and parse existing client error-handling points?

Lines 266-275: Adhere strictly to domain constraints when constructing object paths. Currently, `objectPath` appends the domain ID directly to `basePath`. If the domain ever places constraints on valid IDs, ensure those are enforced or tested here (e.g., restricted characters, maximum length).
Actionable comments posted: 0
♻️ Duplicate comments (1)
asset/infrastructure/decompress/zip.go (1)
Lines 30-82: ⚠️ Potential issue. Concurrency-based data race in `DecompressWithContent`. The pipeline logs indicate a data race, as the `archive/zip` package doesn't guarantee thread-safe concurrent file reads from the same `zip.Reader`. Removing or restricting concurrency helps avoid undefined behavior. Below is a single-threaded approach, adapted from your past commit's review:

func (d *ZipDecompressor) DecompressWithContent(ctx context.Context, content []byte) (<-chan repository.DecompressedFile, error) {
	zipReader, err := zip.NewReader(bytes.NewReader(content), int64(len(content)))
	if err != nil {
		return nil, fmt.Errorf("failed to create zip reader: %w", err)
	}
	resultChan := make(chan repository.DecompressedFile, len(zipReader.File))
-	var wg sync.WaitGroup
-	go func() {
-		wg.Wait()
-		close(resultChan)
-	}()
	// Process each file in the zip archive
	for _, f := range zipReader.File {
		if f.FileInfo().IsDir() || isHiddenFile(f.Name) {
			continue
		}
-		wg.Add(1)
-		go func(f *zip.File) {
-			defer wg.Done()
-			...
-		}(f)
+		content, err := d.processFile(f)
+		if err != nil {
+			resultChan <- repository.DecompressedFile{
+				Filename: f.Name,
+				Error:    err,
+			}
+			continue
+		}
+		resultChan <- repository.DecompressedFile{
+			Filename: f.Name,
+			Content:  content,
+		}
	}
+	close(resultChan)
	return resultChan, nil
}

🧰 Tools
🪛 GitHub Actions: CI
[error] Data race detected in ZipDecompressor.DecompressWithContent between goroutine writing to memory and goroutine reading from same memory location
🧹 Nitpick comments (1)
asset/infrastructure/decompress/zip.go (1)
Lines 136-144: Potential large memory usage during file compression. Here, you read the entire file content into memory before writing to the ZIP archive. For large files, this leads to increased memory consumption. A streaming approach (copying data in chunks) might be more memory efficient.

🧰 Tools
🪛 GitHub Actions: CI
[error] Data race detected in ZipDecompressor.DecompressWithContent between goroutine writing to memory and goroutine reading from same memory location
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
asset/infrastructure/decompress/zip.go (1 hunks)
asset/infrastructure/gcs/client.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- asset/infrastructure/gcs/client.go
🧰 Additional context used
🪛 GitHub Actions: CI
asset/infrastructure/decompress/zip.go
[error] Data race detected in ZipDecompressor.DecompressWithContent between goroutine writing to memory and goroutine reading from same memory location
🔇 Additional comments (1)
asset/infrastructure/decompress/zip.go (1)
Lines 199-201: Using `isHiddenFile` is a neat check. The logic to skip hidden files is sensible and straightforward. Great job keeping hidden files out of processing.

🧰 Tools
🪛 GitHub Actions: CI
[error] Data race detected in ZipDecompressor.DecompressWithContent between goroutine writing to memory and goroutine reading from same memory location
Actionable comments posted: 0
🧹 Nitpick comments (4)
asset/domain/asset.go (2)
Lines 80-81: Rename the `error` field to avoid confusion with the built-in error type. The `error` field may cause confusion or shadowing given that `error` is also a predeclared type in Go. Consider renaming it, for example, to `failureReason` or `errorMessage`.

- error string
+ errorMsg string
Lines 120-131: Validate non-empty fields before applying updates. To guard against unintended empty values, consider validating that `name`, `url`, and `contentType` meet your domain constraints before assigning them.

func (a *Asset) UpdateMetadata(name, url, contentType string) {
+	// Example check:
+	// if name == "" {
+	//	return // or handle error / revert
+	// }
	if name != "" {
		a.name = name
	}
	...

id/id.go (1)
Lines 5-8: Add doc comments for each type to improve clarity. Currently, `Asset`, `Group`, `Project`, and `Workspace` are empty structs lacking documentation. Consider adding short descriptions for maintainability, clarity, and tooling support.

+// Asset represents an asset domain object with specialized behavior and fields.
type Asset struct{}
+// Group represents a grouping structure.
type Group struct{}
...

asset/infrastructure/gcs/client.go (1)
Lines 196-209: Consider rollback for partial failures in Move. When copying an object before deletion, a partial failure can leave the asset in an inconsistent state if the deletion succeeds but the copy fails, or vice versa. Consider adding a transactional approach or an explicit check after copying.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
asset/domain/asset.go (1 hunks)
asset/infrastructure/gcs/client.go (1 hunks)
asset/infrastructure/gcs/client_test.go (1 hunks)
asset/pubsub/pubsub.go (1 hunks)
asset/service.go (1 hunks)
id/id.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- asset/pubsub/pubsub.go
- asset/service.go
- asset/infrastructure/gcs/client_test.go
🔇 Additional comments (3)
asset/infrastructure/gcs/client.go (3)
Lines 44-64: Add input validation for bucketName and gracefully close the client. This logic does not handle an empty or invalid bucket name, and the created GCS client is never closed. Consider returning an error for a blank bucket name and implementing a deferred or contextual cleanup.

func NewClient(ctx context.Context, bucketName string, basePath string, baseURL string) (*Client, error) {
+	if bucketName == "" {
+		return nil, fmt.Errorf("bucket name cannot be empty")
+	}
	client, err := storage.NewClient(ctx)
	if err != nil {
		return nil, fmt.Errorf(errFailedToCreateClient, err)
	}
+	// Close the client once ctx is canceled:
+	go func() {
+		<-ctx.Done()
+		client.Close()
+	}()

Lines 66-82: Add input validation for the Create method. If `asset` or `asset.ID()` is empty, the code could fail unexpectedly. Return an error early if the asset is invalid or the ID is empty.

func (c *Client) Create(ctx context.Context, asset *domain.Asset) error {
+	if asset == nil || asset.ID().String() == "" {
+		return fmt.Errorf("invalid asset data: nil or empty ID")
+	}
	obj := c.getObject(asset.ID())
	...

Lines 126-154: Implement optional pagination for large dataset handling in List. Retrieving all objects at once can be costly. Consider using pagination with `storage.Query.StartOffset` and `storage.Query.MaxResults` to handle large buckets more efficiently.
Actionable comments posted: 2
🧹 Nitpick comments (9)
asset/graphql/schema.resolvers.go (3)
Lines 14-47: Handle partial failures for asset creation vs. file upload. Currently, the asset metadata is created first, then the file is uploaded. If the upload fails, the partial asset record might remain in the repository. Consider implementing a transactional or rollback approach (or a cleanup mechanism) to avoid orphaned entries in case of upload failures.
Lines 50-77: Consider rollback if pre-signed URL creation fails. When the signed URL generation fails, we already have a newly created asset. Without a rollback or cleanup, the repository may contain incomplete entries. Consider handling cleanup in case of errors.

Lines 80-95: Reevaluate forcing status to active. When calling `asset.UpdateStatus(domain.StatusActive, "")`, consider whether the asset might remain in a transitional status (e.g., uploading, extracting) and only set it to active once all processes complete.

asset/graphql/helper.go (2)
Lines 10-12: Validate the file pointer before usage. When working with uploads, consider adding nil checks or size checks to handle unexpected issues with the uploaded file (e.g., an empty or corrupted pointer).

Lines 14-40: Ensure correct error propagation. Within `AssetFromDomain`, the asset's `Error()` is mapped to a string pointer. If further handling of domain-specific errors is needed in the resolver layer, ensure they're surfaced appropriately (e.g., logging or additional context).

asset/service/service.go (1)
Lines 19-45: Transaction or atomicity for complex operations. All methods in the `Service` directly call the repository's methods. If multiple operations need to happen atomically (e.g., create + upload, or update + file operations), consider transaction handling or compensating actions to maintain consistent states.

asset/graphql/gqlgen.yml (1)
32-32: Fix formatting issues.

There are two minor formatting issues:
- Remove the trailing space after `graphql.Upload`
- Add a newline at the end of the file
🧰 Tools
🪛 yamllint (1.35.1)
[error] 32-32: no new line character at the end of file
(new-line-at-end-of-file)
[error] 32-32: trailing spaces
(trailing-spaces)
asset/graphql/schema.graphql (2)
13-18: Consider adding a FAILED status.

The `AssetStatus` enum currently uses ERROR for failure states. Consider adding a distinct `FAILED` status to differentiate between temporary errors and permanent failures. This would help in better error handling and retry logic implementation.

```diff
 enum AssetStatus {
   PENDING
   ACTIVE
   EXTRACTING
   ERROR
+  FAILED
 }
```
1-60: Add field descriptions for better documentation.

Consider adding descriptions to types and fields using GraphQL descriptions for better API documentation. Example:

```graphql
"""
Represents an uploaded asset in the system.
"""
type Asset {
  """
  Unique identifier for the asset.
  """
  id: ID!
  """
  Original filename of the uploaded asset.
  """
  name: String!
  # ... other fields
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
asset/graphql/gqlgen.yml (1 hunks)
asset/graphql/helper.go (1 hunks)
asset/graphql/model.go (1 hunks)
asset/graphql/resolver.go (1 hunks)
asset/graphql/schema.graphql (1 hunks)
asset/graphql/schema.resolvers.go (1 hunks)
asset/service/service.go (1 hunks)
🧰 Additional context used
🪛 yamllint (1.35.1)
asset/graphql/gqlgen.yml
[error] 32-32: no new line character at the end of file
(new-line-at-end-of-file)
[error] 32-32: trailing spaces
(trailing-spaces)
🔇 Additional comments (5)
asset/graphql/schema.resolvers.go (1)
105-108: Resolver pattern looks good

Returning a `mutationResolver` that embeds `*Resolver` is a clean approach. The code is straightforward and adheres to gqlgen patterns.

asset/graphql/resolver.go (1)
1-15: Dependency injection is straightforward

Using a dedicated `Resolver` struct with a constructor (`NewResolver`) enables clear service dependency injection. This design choice simplifies testing and promotes maintainability.

asset/graphql/model.go (2)
14-24: Confirm integer capacity for large file sizes

The `Size` field is declared as `int`. If the system handles very large files, consider using a 64-bit integer (`int64`) to store file sizes to avoid integer overflow.
60-103: Enum usage is consistent

Providing `AssetStatus` constants and validating them in `UnmarshalGQL` is a good practice. This helps avoid invalid states and ensures consistent status handling throughout the system.

asset/graphql/gqlgen.yml (1)
1-32: Verify gqlgen version compatibility.

The configuration looks good, but we should ensure compatibility with the latest stable version of gqlgen.
🧰 Tools
🪛 yamllint (1.35.1)
[error] 32-32: no new line character at the end of file
(new-line-at-end-of-file)
[error] 32-32: trailing spaces
(trailing-spaces)
```graphql
input UpdateAssetMetadataInput {
  id: ID!
  name: String!
  size: Int!
  contentType: String!
}
```
🛠️ Refactor suggestion
Add validation constraints for size and content type.
The `UpdateAssetMetadataInput` should include validation constraints for:
- Maximum file size
- Allowed content types
- Name format/length
Consider using GraphQL directives for validation:
input UpdateAssetMetadataInput {
id: ID!
name: String! @constraint(minLength: 1, maxLength: 255)
size: Int! @constraint(min: 0, max: 104857600) # Example: max 100MB
contentType: String! @constraint(pattern: "^[a-zA-Z0-9]+/[a-zA-Z0-9+.-]+$")
}
asset/graphql/schema.graphql
Outdated
```graphql
type Asset {
  id: ID!
  name: String!
  size: Int!
  contentType: String!
  url: String
  status: AssetStatus!
  error: String
  createdAt: Time!
  updatedAt: Time!
}

enum AssetStatus {
  PENDING
  ACTIVE
  EXTRACTING
  ERROR
}

scalar Time
scalar Upload

type Mutation {
  # Direct upload mutation
  uploadAsset(input: UploadAssetInput!): UploadAssetPayload!

  # Get signed URL for upload
  getAssetUploadURL(input: GetAssetUploadURLInput!): GetAssetUploadURLPayload!

  # Update asset metadata after signed URL upload
  updateAssetMetadata(input: UpdateAssetMetadataInput!): UpdateAssetMetadataPayload!
}

input UploadAssetInput {
  id: ID!
  file: Upload!
}

type UploadAssetPayload {
  asset: Asset!
}

input GetAssetUploadURLInput {
  id: ID!
}

type GetAssetUploadURLPayload {
  uploadURL: String!
}

input UpdateAssetMetadataInput {
  id: ID!
  name: String!
  size: Int!
  contentType: String!
}

type UpdateAssetMetadataPayload {
  asset: Asset!
}
```
🛠️ Refactor suggestion
Add Query type for asset retrieval operations.
The schema only defines mutations but lacks query operations. Consider adding a Query type for retrieving assets:
type Query {
asset(id: ID!): Asset
assets(
first: Int
after: String
filter: AssetFilter
): AssetConnection!
}
input AssetFilter {
status: AssetStatus
contentType: String
createdAfter: Time
createdBefore: Time
}
type AssetConnection {
edges: [AssetEdge!]!
pageInfo: PageInfo!
}
type AssetEdge {
node: Asset!
cursor: String!
}
type PageInfo {
hasNextPage: Boolean!
endCursor: String
}
Actionable comments posted: 4
🧹 Nitpick comments (2)
asset/graphql/schema.graphql (1)
13-21: Consider adding DELETED status to AssetStatus enum.

Adding a DELETED status would enable soft deletion functionality, which is often useful for data recovery and audit purposes.

```diff
 enum AssetStatus {
   PENDING
   ACTIVE
   EXTRACTING
   ERROR
+  DELETED
 }
```
asset/graphql/model.go (1)
86-129: Optimize AssetStatus validation using a map.

Consider using a map for status validation to avoid maintaining the switch statement when new statuses are added.

```diff
 var AllAssetStatus = []AssetStatus{
 	AssetStatusPending,
 	AssetStatusActive,
 	AssetStatusExtracting,
 	AssetStatusError,
 }

+var validAssetStatus = map[AssetStatus]bool{
+	AssetStatusPending:    true,
+	AssetStatusActive:     true,
+	AssetStatusExtracting: true,
+	AssetStatusError:      true,
+}
+
 func (e AssetStatus) IsValid() bool {
-	switch e {
-	case AssetStatusPending, AssetStatusActive, AssetStatusExtracting, AssetStatusError:
-		return true
-	}
-	return false
+	return validAssetStatus[e]
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
asset/graphql/model.go (1 hunks)
asset/graphql/schema.graphql (1 hunks)
asset/graphql/schema.resolvers.go (1 hunks)
🔇 Additional comments (4)
asset/graphql/schema.graphql (3)
1-11: Well-structured Asset type definition!

The Asset type includes all necessary fields with appropriate nullability constraints and audit timestamps.
67-72: Add validation constraints for size and content type.

The input type should include validation constraints for maximum file size, allowed content types, and name format/length.
23-28: 🛠️ Refactor suggestion

Add pagination and filtering to assets query.
The current implementation could face performance issues with large datasets. Consider adding:
- Cursor-based pagination
- Filtering options (e.g., by status, date range)
```diff
 type Query {
   # Get a single asset by ID
   asset(id: ID!): Asset!

   # List all assets
-  assets: [Asset!]!
+  assets(
+    first: Int
+    after: String
+    filter: AssetFilter
+  ): AssetConnection!
+}
+
+input AssetFilter {
+  status: AssetStatus
+  createdAfter: Time
+  createdBefore: Time
+  contentType: String
 }
```

Likely invalid or redundant comment.
asset/graphql/model.go (1)
14-24: Well-defined Asset struct with proper JSON tags!

The struct definition aligns perfectly with the GraphQL schema and includes appropriate JSON tags.
```graphql
input MoveAssetInput {
  id: ID!
  toWorkspaceId: ID
  toProjectId: ID
}
```
💡 Codebase verification
Add validation for destination fields in MoveAssetInput
The concern is valid and particularly important as:
- The resolver is not yet implemented, making this the right time to add validation
- Current schema allows both `toWorkspaceId` and `toProjectId` to be null
- No validation logic exists in the current implementation
Consider implementing the validation in the resolver to ensure at least one destination field is provided before proceeding with the asset move operation.
🔗 Analysis chain
Add validation for destination fields in MoveAssetInput.
The current implementation allows both `toWorkspaceId` and `toProjectId` to be null, which could lead to invalid states. Consider:
- Adding a validation directive to ensure at least one destination is provided
- Adding documentation about the expected behavior when both fields are provided
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if there are any existing validations for MoveAssetInput
ast-grep --pattern 'func validateMoveAssetInput($$$)'
Length of output: 55
Script:
#!/bin/bash
# Look for resolvers handling asset moves
rg -l "MoveAssetInput" --type ts --type js
# Check for any validation directives in schema
rg "@" asset/graphql/schema.graphql
# Look for any validation utilities
rg -l "validate.*Asset" --type ts --type js
Length of output: 422
Script:
#!/bin/bash
# List repository structure to understand file organization
fd . --type f
# Try finding GraphQL schema file
fd schema.graphql
# Use debug mode with ripgrep
rg --debug "MoveAssetInput"
Length of output: 11529
Script:
#!/bin/bash
# Check resolver implementation for validation
rg -A 10 "func.*MoveAsset.*Context" asset/graphql/schema.resolvers.go
# Check if there's any validation in the service layer
rg -A 10 "MoveAsset" asset/service/service.go
Length of output: 592
asset/graphql/schema.resolvers.go
Outdated
```go
// UploadAsset is the resolver for the uploadAsset field.
func (r *mutationResolver) UploadAsset(ctx context.Context, input UploadAssetInput) (*UploadAssetPayload, error) {
	id, err := domain.IDFrom(input.ID)
	if err != nil {
		return nil, err
	}

	// Create asset metadata
	asset := domain.NewAsset(
		id,
		input.File.Filename,
		input.File.Size,
		input.File.ContentType,
	)

	// Create asset metadata first
	if err := r.assetService.Create(ctx, asset); err != nil {
		return nil, err
	}

	// Upload file content
	if err := r.assetService.Upload(ctx, id, FileFromUpload(&input.File)); err != nil {
		return nil, err
	}

	// Update asset status to active
	asset.UpdateStatus(domain.StatusActive, "")
	if err := r.assetService.Update(ctx, asset); err != nil {
		return nil, err
	}

	return &UploadAssetPayload{
		Asset: AssetFromDomain(asset),
	}, nil
}
```
Add transaction handling and cleanup for upload failures.
The current implementation could leave orphaned metadata if the file upload fails. Consider:
- Implementing a transaction-like pattern
- Adding cleanup logic for partial failures
func (r *mutationResolver) UploadAsset(ctx context.Context, input UploadAssetInput) (*UploadAssetPayload, error) {
id, err := domain.IDFrom(input.ID)
if err != nil {
return nil, err
}
// Create asset metadata
asset := domain.NewAsset(
id,
input.File.Filename,
input.File.Size,
input.File.ContentType,
)
// Create asset metadata first
if err := r.assetService.Create(ctx, asset); err != nil {
return nil, err
}
+ // Ensure cleanup on failure
+ success := false
+ defer func() {
+ if !success {
+ _ = r.assetService.Delete(ctx, id)
+ }
+ }()
// Upload file content
if err := r.assetService.Upload(ctx, id, FileFromUpload(&input.File)); err != nil {
return nil, err
}
// Update asset status to active
asset.UpdateStatus(domain.StatusActive, "")
if err := r.assetService.Update(ctx, asset); err != nil {
return nil, err
}
+ success = true
return &UploadAssetPayload{
Asset: AssetFromDomain(asset),
}, nil
}
asset/graphql/schema.resolvers.go
Outdated
```go
// DeleteAsset is the resolver for the deleteAsset field.
func (r *mutationResolver) DeleteAsset(ctx context.Context, input DeleteAssetInput) (*DeleteAssetPayload, error) {
	panic(fmt.Errorf("not implemented: DeleteAsset - deleteAsset"))
}

// DeleteAssets is the resolver for the deleteAssets field.
func (r *mutationResolver) DeleteAssets(ctx context.Context, input DeleteAssetsInput) (*DeleteAssetsPayload, error) {
	panic(fmt.Errorf("not implemented: DeleteAssets - deleteAssets"))
}

// MoveAsset is the resolver for the moveAsset field.
func (r *mutationResolver) MoveAsset(ctx context.Context, input MoveAssetInput) (*MoveAssetPayload, error) {
	panic(fmt.Errorf("not implemented: MoveAsset - moveAsset"))
}

// Asset is the resolver for the asset field.
func (r *queryResolver) Asset(ctx context.Context, id string) (*Asset, error) {
	panic(fmt.Errorf("not implemented: Asset - asset"))
}

// Assets is the resolver for the assets field.
func (r *queryResolver) Assets(ctx context.Context) ([]*Asset, error) {
	panic(fmt.Errorf("not implemented: Assets - assets"))
}
```
Replace panic with proper error handling in unimplemented resolvers.
Using panic for unimplemented methods is dangerous in production. Instead, return appropriate errors.
func (r *queryResolver) Asset(ctx context.Context, id string) (*Asset, error) {
- panic(fmt.Errorf("not implemented: Asset - asset"))
+ return nil, fmt.Errorf("operation not implemented: Asset query")
}
func (r *queryResolver) Assets(ctx context.Context) ([]*Asset, error) {
- panic(fmt.Errorf("not implemented: Assets - assets"))
+ return nil, fmt.Errorf("operation not implemented: Assets query")
}
Committable suggestion skipped: line range outside the PR's diff.
asset/graphql/schema.resolvers.go
Outdated
```go
// UpdateAssetMetadata is the resolver for the updateAssetMetadata field.
func (r *mutationResolver) UpdateAssetMetadata(ctx context.Context, input UpdateAssetMetadataInput) (*UpdateAssetMetadataPayload, error) {
	id, err := domain.IDFrom(input.ID)
	if err != nil {
		return nil, err
	}

	// Get existing asset
	asset, err := r.assetService.Get(ctx, id)
	if err != nil {
		return nil, err
	}

	// Update metadata
	asset.UpdateMetadata(input.Name, "", input.ContentType)
	asset.UpdateStatus(domain.StatusActive, "")

	if err := r.assetService.Update(ctx, asset); err != nil {
		return nil, err
	}

	return &UpdateAssetMetadataPayload{
		Asset: AssetFromDomain(asset),
	}, nil
}
```
💡 Codebase verification
Reconsider unconditional status transition to Active
The UpdateAssetMetadata resolver unconditionally sets the status to Active, which could potentially interrupt ongoing operations. Consider:
- Validating if the current status allows transition to Active
- Preserving the current status if the asset is in EXTRACTING state
- Adding transition validation to prevent skipping important states
The URL handling is implemented correctly and won't clear existing URLs when empty strings are provided.
🔗 Analysis chain
Review status transition and URL handling in UpdateAssetMetadata.
Two potential issues:
- The status is always set to Active without checking the current status
- The URL parameter in UpdateMetadata is passed as an empty string, which might unintentionally clear existing URLs
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for URL handling in UpdateMetadata implementation
ast-grep --pattern 'func ($v *Asset) UpdateMetadata($$$) {
$$$
}'
Length of output: 69
Script:
#!/bin/bash
# Find Asset struct definition and UpdateMetadata method
rg "type Asset struct" -A 5
rg "func.*UpdateMetadata" -A 5
Length of output: 1145
Script:
#!/bin/bash
# Check status-related implementations
rg "func.*UpdateStatus" -A 5
rg "type.*Status.*=" -A 5
Length of output: 311
Script:
#!/bin/bash
# Find Status type definition with different pattern
rg "Status.*=.*iota" -B 5 -A 5
rg "type Status" -B 2 -A 10
Length of output: 566
Actionable comments posted: 2
🧹 Nitpick comments (10)
asset/domain/asset.go (4)
38-60: Mock ID Generation May Cause Race Conditions in Parallel Tests
Overwriting the global ID generator in tests can lead to unexpected collisions under parallel test execution. You could mitigate this by ensuring your tests don’t run in parallel or by using more advanced mocking strategies (e.g., ephemeral test contexts or local scoping).
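One way to avoid the shared-generator race is to inject the generator instead of mutating a package global; each test then owns its own counter. A minimal sketch with illustrative names (not the PR's actual `domain` types):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// IDGen is injected into the component under test, so parallel tests
// cannot collide on a shared package-level generator.
type IDGen func() string

type AssetFactory struct {
	newID IDGen
}

func (f AssetFactory) Create(name string) string {
	return f.newID() + ":" + name
}

// sequentialGen returns a deterministic generator local to one test.
func sequentialGen(prefix string) IDGen {
	var n int64
	return func() string {
		return fmt.Sprintf("%s-%d", prefix, atomic.AddInt64(&n, 1))
	}
}

func main() {
	f := AssetFactory{newID: sequentialGen("t1")}
	fmt.Println(f.Create("logo.png")) // t1-1:logo.png
	fmt.Println(f.Create("doc.zip"))  // t1-2:doc.zip
}
```

Because the counter lives in the closure rather than package scope, two tests running under `t.Parallel()` with separate factories can never observe each other's IDs.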
62-69: Consider Normalized Enumeration for Status

The string-based `Status` constants work, but using typed enums (via iota) or a small specialized type could reduce the risk of typos and make it easier to manage status transitions.
71-84: Avoid Overriding Builtin `error` Terminology

The `error` field within `Asset` can be confused with the standard Go `error` type. Consider renaming it to something like `errMessage` or `failureReason` to avoid shadowing potential references to Go's builtin `error`.

```diff
-	error       string
+	errMessage  string
```
86-97: Optional: Support Flexible Asset Creation

`NewAsset` doesn't allow initializing the `groupID`, `projectID`, or `workspaceID`, requiring them to be set afterwards. If that's a deliberate design choice, clarify it. Otherwise, an alternate constructor accepting more parameters might help reduce the number of steps needed to properly configure an `Asset`.

asset/service/service.go (1)
11-13: Validate Repository Dependency

While a `nil` check for `repo` is not mandatory, you might consider returning an error or panicking if `repo` is nil to help diagnose incorrect usage.

asset/graphql/schema.resolvers.go (5)
49-77: Prevent Partial Metadata Entry

You create an empty `Asset` record to generate an upload URL, but if the client never uses that URL, you may have unnecessary records. Consider a strategy to clean unused assets or delay creation until an upload is confirmed.
105-119: Return Detailed Data After Deletion

Returning the deleted `Asset` rather than just its ID can offer a more consistent user experience, providing confirmation of the exact resource that was removed.
121-141: Consider Bulk Operation with Rollback
When deleting multiple assets, a failure on one asset means that some are deleted while others remain. A bulk operation with partial rollback could ensure consistent system state.
180-193: Consider Caching Frequently Accessed Assets

If `Asset` objects are frequently fetched, implementing a caching strategy could improve performance and reduce load on the underlying repository.
195-208: Support Pagination and Filtering

For large asset catalogs, a simple `Assets` query that returns all assets can be inefficient. Consider adding pagination (e.g., first/after) or filtering capabilities to handle growing datasets gracefully.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
asset/domain/asset.go (1 hunks)
asset/graphql/schema.resolvers.go (1 hunks)
asset/service/service.go (1 hunks)
🔇 Additional comments (7)
asset/domain/asset.go (3)
9-12: Use Type Aliases Judiciously
These aliases make the domain code more expressive, but ensure that it doesn't introduce ambiguity or hamper readability. Provide clear documentation so future maintainers understand their meaning.
14-36: Document Concurrency Constraints for ID Generators
Since these variables are globally scoped, concurrent tests or processes might inadvertently share the same generator logic. Consider documenting concurrency constraints or providing isolation, such as separate test environments or concurrency-safe factories.
113-118: Assess Partial Status Updates

`UpdateStatus` replaces the asset's status and error unconditionally. If more complex workflows arise, you might need to validate the current status before transitioning. This could help avoid overwriting important states or losing crucial error details.

asset/service/service.go (1)
15-17: Constructor Looks Good

`NewService` properly sets up the service struct with a repository dependency, making it straightforward to inject mocks or test doubles.

asset/graphql/schema.resolvers.go (3)
13-47: Add Transaction Handling and Cleanup for Upload Failures
If the content upload fails after the asset metadata is created, you risk leaving orphaned assets in the system. Implementing a transaction-like pattern or a companion delete on failure is strongly recommended.
79-103: Reconsider Unconditional Status Transition to Active

Setting the status to `ACTIVE` here might interrupt ongoing operations (e.g., extraction, scanning). Validate the asset's current status before overwriting it to avoid skipping crucial states.
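The guarded transition suggested here can be sketched as a small allow-list; the permitted transitions below are assumptions for illustration, not the project's actual state machine:

```go
package main

import (
	"errors"
	"fmt"
)

// allowed maps each status to the statuses it may legally move to.
// The concrete sets here are illustrative only.
var allowed = map[string][]string{
	"PENDING":    {"ACTIVE", "EXTRACTING", "ERROR"},
	"EXTRACTING": {"ACTIVE", "ERROR"},
	"ERROR":      {"PENDING"},
	"ACTIVE":     {},
}

// transition returns nil only when the move is in the allow-list,
// instead of overwriting the status unconditionally.
func transition(from, to string) error {
	for _, t := range allowed[from] {
		if t == to {
			return nil
		}
	}
	return errors.New("invalid transition " + from + " -> " + to)
}

func main() {
	fmt.Println(transition("PENDING", "ACTIVE"))     // <nil>
	fmt.Println(transition("EXTRACTING", "PENDING")) // invalid transition
}
```

An `UpdateStatus` that consults such a table would preserve an EXTRACTING asset instead of silently forcing it to ACTIVE.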
143-178: Validate Destination Workspace or Project
Before moving an asset, confirm that the target workspace or project exists and is accessible by the user. Lack of checks could disrupt domain integrity.
asset/domain/asset.go
Outdated
```go
func (a *Asset) UpdateMetadata(name, url, contentType string) {
	if name != "" {
		a.name = name
	}
	if url != "" {
		a.url = url
	}
	if contentType != "" {
		a.contentType = contentType
	}
	a.updatedAt = time.Now()
}
```
🛠️ Refactor suggestion
Validate Workspace or Project Exist Before Movement

`MoveToWorkspace` and `MoveToProject` update `workspaceID` and `projectID` without verifying that the provided IDs exist. To enforce stronger domain integrity, consider performing existence checks or returning errors if they are invalid.
Also applies to: 133-141
asset/service/service.go
Outdated
```go
func (s *Service) Create(ctx context.Context, asset *domain.Asset) error {
	return s.repo.Create(ctx, asset)
}

func (s *Service) Get(ctx context.Context, id domain.ID) (*domain.Asset, error) {
	return s.repo.Read(ctx, id)
}

func (s *Service) Update(ctx context.Context, asset *domain.Asset) error {
	return s.repo.Update(ctx, asset)
}

func (s *Service) Delete(ctx context.Context, id domain.ID) error {
	return s.repo.Delete(ctx, id)
}

func (s *Service) Upload(ctx context.Context, id domain.ID, content io.Reader) error {
	return s.repo.Upload(ctx, id, content)
}

func (s *Service) Download(ctx context.Context, id domain.ID) (io.ReadCloser, error) {
	return s.repo.Download(ctx, id)
}

func (s *Service) GetUploadURL(ctx context.Context, id domain.ID) (string, error) {
	return s.repo.GetUploadURL(ctx, id)
}

func (s *Service) List(ctx context.Context) ([]*domain.Asset, error) {
	return s.repo.List(ctx)
}
```
🛠️ Refactor suggestion
Add Higher-Level Validation in Service Layer
Your methods (Create, Update, Delete, etc.) pass through to the repository. Consider adding domain-level validation (e.g., ensuring the `Asset` is in a valid state before creation) or more robust error handling (e.g., typed errors for not found vs. invalid input).
Actionable comments posted: 2
🧹 Nitpick comments (4)
asset/infrastructure/gcs/client_test.go (4)
67-69: Enhance mock implementation of CopierFrom method.

The current implementation returns an empty `storage.Copier{}`, which may not accurately simulate the GCS behavior. Consider implementing basic copying functionality to make tests more realistic.

```diff
 func (o *mockObject) CopierFrom(src *storage.ObjectHandle) *storage.Copier {
-	return &storage.Copier{}
+	// TODO: Implement basic copying functionality
+	return &storage.Copier{
+		ObjectAttrs: o.attrs,
+		// Add necessary fields and methods
+	}
 }
```
242-259: Consider logging or error aggregation for invalid IDs.

The `List` method silently skips objects with invalid IDs. Consider either logging these cases or aggregating them in an error slice to help with debugging.

```diff
 func (c *testClient) List(ctx context.Context) ([]*domain.Asset, error) {
 	var assets []*domain.Asset
+	var errs []error
 	for _, obj := range c.mockBucket.objects {
 		id, err := domain.IDFrom(path.Base(obj.name))
 		if err != nil {
+			errs = append(errs, fmt.Errorf("invalid ID in object %s: %w", obj.name, err))
 			continue
 		}
 		// ... rest of the code
 	}
+	if len(errs) > 0 {
+		return assets, fmt.Errorf("some objects had invalid IDs: %v", errs)
+	}
 	return assets, nil
 }
```
437-442: Enhance error test cases with message assertions.

The error test cases should verify the specific error messages to ensure correct error handling.

```diff
 func TestClient_Read_NotFound(t *testing.T) {
 	client := newTestClient(t)
 	id := domain.NewID()
-	_, err := client.Read(context.Background(), id)
+	asset, err := client.Read(context.Background(), id)
+	assert.Nil(t, asset)
 	assert.Error(t, err)
+	assert.Contains(t, err.Error(), fmt.Sprintf(errAssetNotFound, id))
 }
```

Also applies to: 444-456, 458-463
485-504: Convert URL validation tests to table-driven tests.

The URL validation test cases would be more maintainable as a table-driven test.

```diff
+func TestClient_GetIDFromURL_Validation(t *testing.T) {
+	tests := []struct {
+		name    string
+		url     string
+		wantErr string
+	}{
+		{
+			name:    "invalid URL",
+			url:     "invalid-url",
+			wantErr: "invalid URL format",
+		},
+		{
+			name:    "mismatched host",
+			url:     "https://different-host.com/test-path/123",
+			wantErr: "invalid host",
+		},
+		{
+			name:    "empty path",
+			url:     "https://storage.googleapis.com",
+			wantErr: "invalid path",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			client := newTestClient(t)
+			_, err := client.GetIDFromURL(tt.url)
+			assert.Error(t, err)
+			assert.Contains(t, err.Error(), tt.wantErr)
+		})
+	}
+}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
asset/infrastructure/gcs/client_test.go (1 hunks)
🔇 Additional comments (1)
asset/infrastructure/gcs/client_test.go (1)
1-734: Overall implementation looks good!

The test file provides comprehensive coverage with well-structured mock implementations. The suggested improvements are mainly for enhanced robustness and maintainability.
Actionable comments posted: 0
♻️ Duplicate comments (1)
asset/service/service.go (1)
24-54: 🛠️ Refactor suggestion

Add domain-level validation and error handling in Service methods

The `Service` methods (`Create`, `Update`, `Delete`, etc.) currently pass through directly to the repository without additional logic. Consider adding domain-level validations (e.g., checking the validity of `Asset` before creation or update) and more robust error handling (e.g., distinguishing between different error types) to enhance the robustness of the service layer.
🧹 Nitpick comments (2)
asset/service/service.go (2)
17-22: Allow injecting `Decompressor` into the `Service` constructor for flexibility

Currently, the `NewService` function initializes the `decompressor` field with a specific implementation (`decompress.NewZipDecompressor()`), which reduces flexibility and testability. Consider accepting a `repository.Decompressor` as a parameter to `NewService` to allow for different decompressor implementations or mocking in tests.

Apply this diff to update the constructor:

```diff
-func NewService(repo repository.PersistenceRepository) *Service {
+func NewService(repo repository.PersistenceRepository, decompressor repository.Decompressor) *Service {
 	return &Service{
 		repo:         repo,
-		decompressor: decompress.NewZipDecompressor(),
+		decompressor: decompressor,
 	}
 }
```

Update any calls to `NewService` accordingly, passing in the desired decompressor implementation.
24-54: Add documentation comments for exported methods

The exported methods of the `Service` struct lack Go-style documentation comments. According to Go best practices, all exported functions and methods should have comments starting with the function name. This improves code readability and aids in generating documentation.

Apply this diff to add documentation comments:

```diff
+// Create creates a new asset in the repository.
 func (s *Service) Create(ctx context.Context, asset *domain.Asset) error {
 	return s.repo.Create(ctx, asset)
 }

+// Get retrieves an asset by ID from the repository.
 func (s *Service) Get(ctx context.Context, id domain.ID) (*domain.Asset, error) {
 	return s.repo.Read(ctx, id)
 }

+// Update updates an existing asset in the repository.
 func (s *Service) Update(ctx context.Context, asset *domain.Asset) error {
 	return s.repo.Update(ctx, asset)
 }

+// Delete removes an asset by ID from the repository.
 func (s *Service) Delete(ctx context.Context, id domain.ID) error {
 	return s.repo.Delete(ctx, id)
 }

+// Upload uploads content for an asset.
 func (s *Service) Upload(ctx context.Context, id domain.ID, content io.Reader) error {
 	return s.repo.Upload(ctx, id, content)
 }

+// Download retrieves the content of an asset.
 func (s *Service) Download(ctx context.Context, id domain.ID) (io.ReadCloser, error) {
 	return s.repo.Download(ctx, id)
 }

+// GetUploadURL obtains an upload URL for an asset.
 func (s *Service) GetUploadURL(ctx context.Context, id domain.ID) (string, error) {
 	return s.repo.GetUploadURL(ctx, id)
 }

+// List retrieves all assets from the repository.
 func (s *Service) List(ctx context.Context) ([]*domain.Asset, error) {
 	return s.repo.List(ctx)
 }
```
Actionable comments posted: 1
♻️ Duplicate comments (1)
asset/infrastructure/gcs/client_test.go (1)
261-269: ⚠️ Potential issue

Add prefix validation in DeleteAll method.

The `DeleteAll` method should validate the prefix to prevent accidental deletion of all objects.
🧹 Nitpick comments (6)
asset/infrastructure/gcs/client_test.go (2)
574-606: Enhance List test assertions.

The test should verify the actual content of returned assets, not just their count.

```diff
 assets, err := client.List(context.Background())
 assert.NoError(t, err)
 assert.Len(t, assets, len(objects))
+
+// Verify each asset's content
+for _, obj := range objects {
+	found := false
+	for _, asset := range assets {
+		if asset.ID() == obj.id {
+			assert.Equal(t, obj.name, asset.Name())
+			assert.Equal(t, obj.contentType, asset.ContentType())
+			found = true
+			break
+		}
+	}
+	assert.True(t, found, "Asset %s not found in results", obj.id)
+}
```
644-649: Improve prefix matching in DeleteAll test.

The current string contains check might match unintended paths. Use `strings.HasPrefix` for exact prefix matching.

```diff
 for name := range client.mockBucket.objects {
-	if strings.Contains(name, "test-prefix") {
+	if strings.HasPrefix(name, path.Join(client.basePath, "test-prefix")) {
 		t.Errorf("Object with test-prefix should be deleted: %s", name)
 	}
 	remainingCount++
 }
```

asset/repository/decompressor_repository.go (1)
26-31: Consider adding file metadata to DecompressedFile struct

The struct could benefit from additional metadata fields that might be useful for consumers:

- File size
- Last modified time
- File mode/permissions

```diff
 type DecompressedFile struct {
 	Filename string
 	Content  io.Reader
 	Error    error
+	Size     int64
+	ModTime  time.Time
+	Mode     os.FileMode
 }
```

asset/infrastructure/decompress/zip_test.go (2)
12-49: Enhance decompression tests with edge cases

The current test suite needs additional coverage:

- Large files (>1MB) to test memory handling
- Empty files
- Deep directory structures
- Special characters in filenames

Here's an example of additional test cases:

```diff
 func TestZipDecompressor_DecompressWithContent(t *testing.T) {
 	files := map[string]string{
 		"test1.txt": "Hello, World!",
 		"test2.txt": "This is a test file",
 		".hidden":   "This should be skipped",
+		"empty.txt": "",
+		"large.txt": strings.Repeat("a", 1024*1024), // 1MB file
+		"dir/nested/file.txt": "Nested file content",
+		"special_チャー.txt": "Special characters",
 	}
 	// ... rest of the test
 }
```
🪛 GitHub Actions: CI
[error] 29: Test failure in TestZipDecompressor_DecompressWithContent due to race condition
93-129: Add stress test for concurrent operations

The context cancellation test should include stress testing with multiple concurrent operations.

```diff
 func TestZipDecompressor_ContextCancellation(t *testing.T) {
+	// Add stress test
+	t.Run("stress test", func(t *testing.T) {
+		files := make(map[string]string)
+		for i := 0; i < 100; i++ {
+			files[fmt.Sprintf("file%d.txt", i)] = strings.Repeat("content", 1000)
+		}
+
+		zipContent, err := createTestZip(files)
+		assert.NoError(t, err)
+
+		d := NewZipDecompressor()
+		ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
+		defer cancel()
+
+		resultChan, err := d.DecompressWithContent(ctx, zipContent)
+		assert.NoError(t, err)
+
+		var wg sync.WaitGroup
+		wg.Add(1)
+		go func() {
+			defer wg.Done()
+			for range resultChan {
+				// Consume results
+			}
+		}()
+
+		wg.Wait()
+	})
 }
```
🪛 GitHub Actions: CI
[error] 29: Test failure in TestZipDecompressor_DecompressWithContent due to race condition
asset/service/service.go (1)
56-65: Add progress tracking for zip operations

The zip operations could benefit from progress tracking for large files.

```diff
 func (s *Service) DecompressZip(ctx context.Context, content []byte) (<-chan repository.DecompressedFile, error) {
+	log.Info("starting zip decompression", zap.Int("content_size", len(content)))
+
+	// Wrap the result channel to add progress logging
+	resultChan, err := s.decompressor.DecompressWithContent(ctx, content)
+	if err != nil {
+		return nil, fmt.Errorf("failed to start decompression: %w", err)
+	}
+
+	trackingChan := make(chan repository.DecompressedFile)
+	go func() {
+		defer close(trackingChan)
+		fileCount := 0
+		for result := range resultChan {
+			fileCount++
+			log.Info("decompressed file",
+				zap.String("filename", result.Filename),
+				zap.Int("file_count", fileCount))
+			trackingChan <- result
+		}
+	}()
+
+	return trackingChan, nil
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- asset/infrastructure/decompress/zip.go (1 hunks)
- asset/infrastructure/decompress/zip_test.go (1 hunks)
- asset/infrastructure/gcs/client_test.go (1 hunks)
- asset/repository/decompressor_repository.go (1 hunks)
- asset/service/service.go (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: CI
asset/infrastructure/decompress/zip_test.go
[error] 29: Test failure in TestZipDecompressor_DecompressWithContent due to race condition
asset/infrastructure/decompress/zip.go
[error] Data race detected in ZipDecompressor.DecompressWithContent: concurrent read/write access to channel
[error] 59: Panic in TestZipDecompressor_InvalidZip: send on closed channel
🔇 Additional comments (3)
asset/infrastructure/gcs/client_test.go (1)
90-95: ⚠️ Potential issue

Add nil check to prevent panic in Close method.

The `Close()` method could panic if the object doesn't exist in the bucket.

```diff
 func (w *mockWriter) Close() error {
 	obj := w.bucket.objects[w.objectName]
+	if obj == nil {
+		return fmt.Errorf("object not found: %s", w.objectName)
+	}
 	obj.data = w.buf.Bytes()
 	obj.attrs = w.attrs
 	return nil
 }
```

Likely invalid or redundant comment.
asset/infrastructure/decompress/zip.go (1)
31-83: ⚠️ Potential issue

Fix data race in concurrent file processing

The current implementation has a critical data race issue when processing files concurrently. The `archive/zip` package is not thread-safe for concurrent reads from the same `Reader`. Additionally, the channel operations need better synchronization.

Apply this fix to process files sequentially while maintaining async channel communication:

```diff
 func (d *ZipDecompressor) DecompressWithContent(ctx context.Context, content []byte) (<-chan repository.DecompressedFile, error) {
 	zipReader, err := zip.NewReader(bytes.NewReader(content), int64(len(content)))
 	if err != nil {
 		return nil, fmt.Errorf("failed to create zip reader: %w", err)
 	}

 	resultChan := make(chan repository.DecompressedFile, len(zipReader.File))
-	var wg sync.WaitGroup

 	go func() {
-		wg.Wait()
-		close(resultChan)
+		defer close(resultChan)
+
+		for _, f := range zipReader.File {
+			if f.FileInfo().IsDir() || isHiddenFile(f.Name) {
+				continue
+			}
+
+			select {
+			case <-ctx.Done():
+				resultChan <- repository.DecompressedFile{
+					Filename: f.Name,
+					Error:    ctx.Err(),
+				}
+				return
+			default:
+				content, err := d.processFile(f)
+				if err != nil {
+					resultChan <- repository.DecompressedFile{
+						Filename: f.Name,
+						Error:    err,
+					}
+					continue
+				}
+
+				resultChan <- repository.DecompressedFile{
+					Filename: f.Name,
+					Content:  content,
+				}
+			}
+		}
 	}()

-	for _, f := range zipReader.File {
-		if f.FileInfo().IsDir() || isHiddenFile(f.Name) {
-			continue
-		}
-
-		wg.Add(1)
-		go func(f *zip.File) {
-			defer wg.Done()
-			// ... existing processing code ...
-		}(f)
-	}

 	return resultChan, nil
 }
```

Likely invalid or redundant comment.
🧰 Tools
🪛 GitHub Actions: CI
[error] Data race detected in ZipDecompressor.DecompressWithContent: concurrent read/write access to channel
[error] 59: Panic in TestZipDecompressor_InvalidZip: send on closed channel
asset/service/service.go (1)
24-54: 🛠️ Refactor suggestion

Enhance service layer with validation and error handling

The service methods are simple pass-through calls. Consider adding:

- Domain validation
- Error wrapping with context
- Logging for operations

Example improvement:

```diff
 func (s *Service) Create(ctx context.Context, asset *domain.Asset) error {
+	if err := asset.Validate(); err != nil {
+		return fmt.Errorf("invalid asset: %w", err)
+	}
+
+	log.Info("creating asset", zap.String("id", asset.ID.String()))
+
 	if err := s.repo.Create(ctx, asset); err != nil {
-		return err
+		return fmt.Errorf("failed to create asset: %w", err)
 	}
+
+	log.Info("asset created", zap.String("id", asset.ID.String()))
 	return nil
 }
```

Apply similar improvements to other methods.
Likely invalid or redundant comment.
Actionable comments posted: 4
🧹 Nitpick comments (5)
asset/domain/build.go (3)
8-12: Consider adding error documentation.

The error variables are well-defined but lack documentation comments explaining when these errors occur.

Add documentation for each error variable:

```diff
+// ErrEmptyWorkspaceID is returned when attempting to build an Asset without a workspace ID
 var ErrEmptyWorkspaceID = errors.New("workspace id is required")
+// ErrEmptyURL is returned when attempting to build an Asset without a URL
 var ErrEmptyURL = errors.New("url is required")
+// ErrEmptySize is returned when attempting to build an Asset with size <= 0
 var ErrEmptySize = errors.New("size must be greater than 0")
```
22-44: Consider adding validation for URL format.

The Build method validates required fields but doesn't check if the URL is properly formatted.

Consider adding URL validation:

```diff
+import "net/url"
+
+var ErrInvalidURL = errors.New("invalid url format")

 func (b *Builder) Build() (*Asset, error) {
 	if b.a.id.IsNil() {
 		return nil, ErrInvalidID
 	}
 	if b.a.workspaceID.IsNil() {
 		return nil, ErrEmptyWorkspaceID
 	}
 	if b.a.url == "" {
 		return nil, ErrEmptyURL
 	}
+	if _, err := url.Parse(b.a.url); err != nil {
+		return nil, ErrInvalidURL
+	}
 	if b.a.size <= 0 {
 		return nil, ErrEmptySize
 	}
```
46-52: Consider adding error wrapping in MustBuild.

The MustBuild method could provide more context when panicking.

Consider wrapping the error:

```diff
 func (b *Builder) MustBuild() *Asset {
 	r, err := b.Build()
 	if err != nil {
-		panic(err)
+		panic(fmt.Errorf("failed to build asset: %w", err))
 	}
 	return r
 }
```

asset/domain/build_test.go (1)
130-136: Consider using a fixed time for tests.

The test relies on runtime-generated timestamps, which could make tests non-deterministic.

Consider using a fixed time for testing:

```diff
+	fixedTime := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)
+	oldTimeNow := timeNow
+	timeNow = func() time.Time { return fixedTime }
+	defer func() { timeNow = oldTimeNow }()

 	if tt.want.createdAt.IsZero() {
-		assert.False(t, got.createdAt.IsZero())
-		assert.False(t, got.updatedAt.IsZero())
-		tt.want.createdAt = got.createdAt
-		tt.want.updatedAt = got.updatedAt
+		tt.want.createdAt = fixedTime
+		tt.want.updatedAt = fixedTime
 	}
```

asset/pubsub/pubsub.go (1)
9-20: Consider adding event type validation.

The EventType is a string type without validation, which could allow invalid event types.

Consider adding validation:

```diff
 type EventType string

+// IsValid checks if the event type is valid
+func (e EventType) IsValid() bool {
+	switch e {
+	case EventTypeAssetCreated, EventTypeAssetUpdated,
+		EventTypeAssetDeleted, EventTypeAssetUploaded,
+		EventTypeAssetExtracted, EventTypeAssetTransferred:
+		return true
+	}
+	return false
+}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- asset/domain/build.go (1 hunks)
- asset/domain/build_test.go (1 hunks)
- asset/pubsub/pubsub.go (1 hunks)
- asset/pubsub/pubsub_test.go (1 hunks)
- asset/service/service.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- asset/service/service.go
🧰 Additional context used
🪛 golangci-lint (1.62.2)
asset/domain/build_test.go
186-186: Error return value of `(*github.com/reearth/reearthx/asset/domain.Builder).MustBuild` is not checked (errcheck)
asset/pubsub/pubsub_test.go (Outdated)

```go
func (m *mockPublisher) Publish(ctx context.Context, topic string, msg interface{}) error {
	m.published = append(m.published, mockPublishedEvent{topic: topic, msg: msg})
	return nil
}
```
🛠️ Refactor suggestion
Consider testing error scenarios in mock publisher.
The mock publisher always returns nil, making it impossible to test error handling in the pubsub package.
Add error simulation to the mock:
```diff
 type mockPublisher struct {
 	published []mockPublishedEvent
+	err       error // Add error field for testing error scenarios
 }

 func (m *mockPublisher) Publish(ctx context.Context, topic string, msg interface{}) error {
+	if m.err != nil {
+		return m.err
+	}
 	m.published = append(m.published, mockPublishedEvent{topic: topic, msg: msg})
 	return nil
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
type mockPublisher struct {
	published []mockPublishedEvent
	err       error // Add error field for testing error scenarios
}

func (m *mockPublisher) Publish(ctx context.Context, topic string, msg interface{}) error {
	if m.err != nil {
		return m.err
	}
	m.published = append(m.published, mockPublishedEvent{topic: topic, msg: msg})
	return nil
}
```
Actionable comments posted: 3
🧹 Nitpick comments (3)
asset/repository/pubsub_repository.go (2)
23-30: Consider adding validation for required fields in AssetEvent struct

The `AssetEvent` struct contains several optional fields marked with `omitempty`, but there's no clear indication of which fields are required for different event types. For example, `AssetID` should likely be required for all events.

Consider adding validation or documentation to clarify field requirements for each event type. You could also use custom JSON marshaling to enforce required fields:

```diff
 type AssetEvent struct {
-	Type    EventType `json:"type"`
-	AssetID domain.ID `json:"asset_id"`
+	Type    EventType `json:"type" validate:"required"`
+	AssetID domain.ID `json:"asset_id" validate:"required"`
 	WorkspaceID domain.WorkspaceID `json:"workspace_id,omitempty"`
 	ProjectID   domain.ProjectID   `json:"project_id,omitempty"`
 	Status      domain.Status      `json:"status,omitempty"`
 	Error       string             `json:"error,omitempty"`
 }
```
33-51: Consider adding context timeout recommendations

The interface methods accept a context but don't provide guidance on recommended timeout values for different operations.

Consider adding documentation comments to suggest appropriate timeout values for each operation. For example:

```diff
 type PubSubRepository interface {
-	// PublishAssetCreated publishes an asset created event
+	// PublishAssetCreated publishes an asset created event.
+	// Recommended context timeout: 5 seconds
 	PublishAssetCreated(ctx context.Context, asset *domain.Asset) error
 	// ... similar for other methods
 }
```

asset/infrastructure/pubsub/pubsub.go (1)
11-13: Consider adding error types for Publisher interface

The `Publisher` interface's `Publish` method returns a generic error, making it difficult for callers to handle specific error cases.

Consider defining specific error types for common publishing failures:

```go
type PublishError struct {
	Type    string
	Message string
	Cause   error
}

type Publisher interface {
	Publish(ctx context.Context, topic string, msg interface{}) *PublishError
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- asset/infrastructure/pubsub/pubsub.go (1 hunks)
- asset/infrastructure/pubsub/pubsub_test.go (1 hunks)
- asset/repository/pubsub_repository.go (1 hunks)
2945b95 to 51af443 (Compare)
Actionable comments posted: 2
♻️ Duplicate comments (5)
asset/infrastructure/pubsub/pubsub.go (5)
81-98: ⚠️ Potential issue

Add nil pointer check for the `asset` parameter

In `PublishAssetCreated`, the `asset` parameter is dereferenced without checking for `nil`. This could lead to a panic if `nil` is passed to the function.

Apply this diff to add a nil check:

```diff
 func (p *AssetPubSub) PublishAssetCreated(ctx context.Context, asset *domain.Asset) error {
+	if asset == nil {
+		return fmt.Errorf("asset cannot be nil")
+	}
 	event := repository.AssetEvent{
 		Type:        repository.EventTypeAssetCreated,
 		AssetID:     asset.ID(),
 		WorkspaceID: asset.WorkspaceID(),
 		ProjectID:   asset.ProjectID(),
 		Status:      asset.Status(),
 		Error:       asset.Error(),
 	}
```
101-118: ⚠️ Potential issue

Add nil pointer check for the `asset` parameter

In `PublishAssetUpdated`, the `asset` parameter is dereferenced without checking for `nil`. This could lead to a panic if `nil` is passed to the function.

Apply this diff to add a nil check:

```diff
 func (p *AssetPubSub) PublishAssetUpdated(ctx context.Context, asset *domain.Asset) error {
+	if asset == nil {
+		return fmt.Errorf("asset cannot be nil")
+	}
 	event := repository.AssetEvent{
 		Type:        repository.EventTypeAssetUpdated,
 		AssetID:     asset.ID(),
 		WorkspaceID: asset.WorkspaceID(),
 		ProjectID:   asset.ProjectID(),
 		Status:      asset.Status(),
 		Error:       asset.Error(),
 	}
```
137-154: ⚠️ Potential issue

Add nil pointer check for the `asset` parameter

In `PublishAssetUploaded`, the `asset` parameter is dereferenced without checking for `nil`. This could lead to a panic if `nil` is passed to the function.

Apply this diff to add a nil check:

```diff
 func (p *AssetPubSub) PublishAssetUploaded(ctx context.Context, asset *domain.Asset) error {
+	if asset == nil {
+		return fmt.Errorf("asset cannot be nil")
+	}
 	event := repository.AssetEvent{
 		Type:        repository.EventTypeAssetUploaded,
 		AssetID:     asset.ID(),
 		WorkspaceID: asset.WorkspaceID(),
 		ProjectID:   asset.ProjectID(),
 		Status:      asset.Status(),
 		Error:       asset.Error(),
 	}
```
157-174: ⚠️ Potential issue

Add nil pointer check for the `asset` parameter

In `PublishAssetExtracted`, the `asset` parameter is dereferenced without checking for `nil`. This could lead to a panic if `nil` is passed to the function.

Apply this diff to add a nil check:

```diff
 func (p *AssetPubSub) PublishAssetExtracted(ctx context.Context, asset *domain.Asset) error {
+	if asset == nil {
+		return fmt.Errorf("asset cannot be nil")
+	}
 	event := repository.AssetEvent{
 		Type:        repository.EventTypeAssetExtracted,
 		AssetID:     asset.ID(),
 		WorkspaceID: asset.WorkspaceID(),
 		ProjectID:   asset.ProjectID(),
 		Status:      asset.Status(),
 		Error:       asset.Error(),
 	}
```
177-194: ⚠️ Potential issue

Add nil pointer check for the `asset` parameter

In `PublishAssetTransferred`, the `asset` parameter is dereferenced without checking for `nil`. This could lead to a panic if `nil` is passed to the function.

Apply this diff to add a nil check:

```diff
 func (p *AssetPubSub) PublishAssetTransferred(ctx context.Context, asset *domain.Asset) error {
+	if asset == nil {
+		return fmt.Errorf("asset cannot be nil")
+	}
 	event := repository.AssetEvent{
 		Type:        repository.EventTypeAssetTransferred,
 		AssetID:     asset.ID(),
 		WorkspaceID: asset.WorkspaceID(),
 		ProjectID:   asset.ProjectID(),
 		Status:      asset.Status(),
 		Error:       asset.Error(),
 	}
```
🧹 Nitpick comments (8)
asset/domain/build_test.go (4)
16-141: Consider grouping test cases by scenario type.

The test cases are comprehensive but could be better organized. Consider grouping them into categories:

```go
tests := []struct {
	name    string
	build   func() *Builder
	want    *Asset
	wantErr error
}{
	// Success cases
	{
		name: "success with all fields",
		// ...
	},
	{
		name: "success with minimal required fields",
		// ...
	},
	// Validation error cases
	{
		name: "error: invalid id",
		// ...
	},
	// ...
}
```
203-207: Consider adding uniqueness validation.

While the test verifies non-nil and valid IDs, consider adding a check for uniqueness:

```go
func TestBuilder_NewID(t *testing.T) {
	b1 := New().NewID()
	b2 := New().NewID()
	assert.NotNil(t, b1.a.id)
	assert.NotNil(t, b2.a.id)
	assert.False(t, b1.a.id.IsNil())
	assert.False(t, b2.a.id.IsNil())
	assert.NotEqual(t, b1.a.id, b2.a.id, "Generated IDs should be unique")
}
```
209-337: Consider adding edge cases for setters.

While the setter tests are comprehensive for valid inputs, consider adding edge cases such as:
- Empty strings for Name, URL, ContentType
- Zero or negative values for Size
- Nil values for ID fields
- Invalid status values
- Empty error messages
This would help ensure robust validation in the Builder implementation.
1-337: Consider extracting common test helpers.

The test file could benefit from helper functions for common operations:
- Creating test assets with default values
- Generating test IDs
- Validating common asset properties
This would reduce duplication and make the tests more maintainable.
Example helper:
```go
func createTestAsset(t *testing.T) (*Builder, ID, WorkspaceID) {
	id := NewID()
	wid := NewWorkspaceID()
	return New().
		ID(id).
		WorkspaceID(wid).
		URL("https://example.com/test.txt").
		Size(100), id, wid
}
```

asset/service/service.go (3)
13-27: Consider injecting the decompressor dependency

The constructor creates a new `ZipDecompressor` internally. Consider accepting it as a parameter to:
- Follow dependency inversion principle
- Make testing easier with mock decompressors
- Allow for different decompressor implementations
```diff
-func NewService(repo repository.PersistenceRepository, pubsub repository.PubSubRepository) *Service {
+func NewService(
+	repo repository.PersistenceRepository,
+	decompressor repository.Decompressor,
+	pubsub repository.PubSubRepository,
+) *Service {
 	return &Service{
 		repo:         repo,
-		decompressor: decompress.NewZipDecompressor(),
+		decompressor: decompressor,
 		pubsub:       pubsub,
 	}
 }
```
29-40: Consider exposing publish errors to callers

The method logs publish errors but doesn't return them. This could hide important failures from callers who might need to handle these cases.

Consider either:

- Returning both errors:

```diff
-func (s *Service) Create(ctx context.Context, asset *domain.Asset) error {
+func (s *Service) Create(ctx context.Context, asset *domain.Asset) (repoErr, pubErr error) {
 	if err := s.repo.Create(ctx, asset); err != nil {
 		return err
 	}
-	if err := s.pubsub.PublishAssetCreated(ctx, asset); err != nil {
-		log.Errorfc(ctx, "failed to publish asset created event: %v", err)
-	}
-	return nil
+	return nil, s.pubsub.PublishAssetCreated(ctx, asset)
 }
```
- Or adding a configuration option to fail on publish errors.
73-89: Optimize Upload to avoid extra Read call

The method makes an additional Read call after Upload to get the asset for publishing. This creates an extra database round trip.

Consider modifying the repository interface to return the updated asset:

```diff
 func (s *Service) Upload(ctx context.Context, id domain.ID, content io.Reader) error {
-	if err := s.repo.Upload(ctx, id, content); err != nil {
+	asset, err := s.repo.Upload(ctx, id, content)
+	if err != nil {
 		return err
 	}
-	asset, err := s.repo.Read(ctx, id)
-	if err != nil {
-		return err
-	}
 	if err := s.pubsub.PublishAssetUploaded(ctx, asset); err != nil {
 		log.Errorfc(ctx, "failed to publish asset uploaded event: %v", err)
 	}
 	return nil
 }
```

asset/infrastructure/pubsub/pubsub_test.go (1)
35-72: Improve test robustness and cleanup

Consider these improvements to the test:

- Verify event order using timestamps or sequence numbers
- Add cleanup to prevent subscription leaks
- Add timeout to prevent test hangs

```diff
 func TestAssetPubSub_Subscribe(t *testing.T) {
+	t.Cleanup(func() {
+		// Cleanup subscriptions
+	})
+
 	ps := NewAssetPubSub(&mockPublisher{}, "test-topic")
 	var receivedEvents []repository.AssetEvent
 	var mu sync.Mutex
+	done := make(chan struct{})

 	// Subscribe to all events
 	ps.Subscribe("*", func(ctx context.Context, event repository.AssetEvent) {
 		mu.Lock()
 		receivedEvents = append(receivedEvents, event)
+		if len(receivedEvents) == 3 {
+			close(done)
+		}
 		mu.Unlock()
 	})

 	// ... rest of the test ...

+	// Wait with timeout
+	select {
+	case <-done:
+	case <-time.After(5 * time.Second):
+		t.Fatal("test timed out")
+	}
+
 	// Check received events
 	mu.Lock()
 	defer mu.Unlock()
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- account/accountusecase/accountinteractor/user.go (1 hunks)
- asset/domain/build_test.go (1 hunks)
- asset/graphql/resolver.go (1 hunks)
- asset/infrastructure/pubsub/pubsub.go (1 hunks)
- asset/infrastructure/pubsub/pubsub_test.go (1 hunks)
- asset/repository/pubsub_repository.go (1 hunks)
- asset/service/service.go (1 hunks)
🔇 Additional comments (9)
asset/domain/build_test.go (2)
10-14: LGTM! Well-structured initialization test.

The test effectively verifies both the Builder creation and its internal state initialization.

143-201: LGTM! Proper panic testing and error handling.

The test effectively verifies both successful builds and panic scenarios, with appropriate error handling and the correct use of the nolint directive.
account/accountusecase/accountinteractor/user.go (2)
Line range hint 84-101: Well-structured error handling pattern!

The error handling flow in `GetUserByCredentials` follows security best practices:
- Distinguishes between system errors and user-related errors
- Performs password validation in a timing-safe manner
- Checks user verification status after authentication
85-85: Correct error handling improvement!

The change fixes the argument order in `errors.Is()` to align with Go's standard library expectations. The correct order is `errors.Is(err, target)`, where `err` is the error being checked and `target` is the error to match against.

Let's verify if there are any other instances of incorrect `errors.Is` usage in the codebase:

✅ Verification successful
Verified: Error handling is consistent across the codebase
All instances of `errors.Is()` in the codebase follow the correct argument order: `errors.Is(err, targetError)`. The change in the reviewed file aligns with the established pattern used throughout the project.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for potential incorrect usage of errors.Is where ErrNotFound might be the first argument
rg "errors\.Is\(\s*.*ErrNotFound\s*,.*\)" --type go
```

Length of output: 53
Script:
```shell
#!/bin/bash
# First, let's find how the error package is imported and used
rg "import.*\".*error" --type go -A 5

# Then find all errors.Is usage patterns
rg "errors\.Is\(" --type go -A 1
```

Length of output: 4893
asset/graphql/resolver.go (2)
8-11: Struct `Resolver` is properly defined

The `Resolver` struct correctly includes the necessary fields `assetService` and `pubsub` to handle asset-related GraphQL operations.

13-18: Constructor function `NewResolver` is correctly implemented

The `NewResolver` function properly initializes the `Resolver` struct with the provided `assetService` and `pubsub` instances.

asset/repository/pubsub_repository.go (1)

1-61: Event and interface definitions are well-structured

The definitions of `EventType`, `AssetEvent`, and the `PubSubRepository` interface are clear and align with the domain requirements. Using string constants for event types enhances readability and maintainability.

13-25: Add error simulation to mock publisher

The mock publisher always returns nil error, which doesn't test error handling paths.

142-260: Add test cases for error scenarios

The test suite only covers successful publishing scenarios.
Actionable comments posted: 1
🧹 Nitpick comments (4)
asset/domain/asset.go (2)
38-60: Consider reducing duplication in mock functions.

The mock functions follow the same pattern and could be refactored using a helper function to reduce code duplication. Note that the helper must take a pointer to the generator variable so the reassignment is visible to callers:

```diff
+func mockIDGenerator[T any](original *func() T, mockID T) func() {
+	originalFn := *original
+	*original = func() T { return mockID }
+	return func() { *original = originalFn }
+}
+
 func MockNewID(i ID) func() {
-	original := NewID
-	NewID = func() ID { return i }
-	return func() { NewID = original }
+	return mockIDGenerator(&NewID, i)
 }

 func MockNewGroupID(i GroupID) func() {
-	original := NewGroupID
-	NewGroupID = func() GroupID { return i }
-	return func() { NewGroupID = original }
+	return mockIDGenerator(&NewGroupID, i)
 }
```
71-84: Consider adding size validation.

The Asset struct should validate that the size field is non-negative during creation and updates.

Example validation in NewAsset:

```diff
 func NewAsset(id ID, name string, size int64, contentType string) *Asset {
+	if size < 0 {
+		size = 0
+	}
 	now := time.Now()
 	return &Asset{
 		id:          id,
 		name:        name,
 		size:        size,
 		contentType: contentType,
 		status:      StatusPending,
 		createdAt:   now,
 		updatedAt:   now,
 	}
 }
```
10-23: Add negative test cases for NewAsset.

Consider adding test cases for invalid inputs:
- Negative size
- Empty name
- Invalid content type
25-35: Add test cases for invalid status updates.

Consider adding test cases for:
- Invalid status values
- Empty status
- Status transitions
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- asset/domain/asset.go (1 hunks)
- asset/domain/asset_test.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: CI
🔇 Additional comments (6)
asset/domain/asset.go (4)
1-37: LGTM! Well-structured type declarations and helper functions.

The type aliases and helper functions provide a clean abstraction layer and consistent error handling patterns.

62-69: LGTM! Clear and descriptive status definitions.

The status constants are well-defined and follow a consistent naming pattern.

86-112: LGTM! Well-implemented constructor and getters.

The constructor properly initializes all fields and the getters provide complete access to the asset's state.

132-140: Add workspace and project ID validation.

The move operations should validate the provided IDs before updating.

asset/domain/asset_test.go (2)

37-57: LGTM! Comprehensive metadata update tests.

Good coverage of both full and partial updates.

119-157: LGTM! Well-implemented mock tests.

The mock tests properly verify both the mocking and cleanup functionality.
Actionable comments posted: 10
🧹 Nitpick comments (9)
asset/domain/group.go (1)
Lines 13-20: Consider adding field validation. The Group struct should validate maximum lengths for string fields to prevent potential issues with storage or display. Consider adding constants and validation for maximum field lengths:

```go
const (
	MaxGroupNameLength        = 255
	MaxGroupPolicyLength      = 1024
	MaxGroupDescriptionLength = 1024
)

type Group struct {
	id          GroupID
	name        string
	policy      string
	description string
	createdAt   time.Time
	updatedAt   time.Time
}
```

asset/domain/group_test.go (1)
Lines 23-37: Avoid `time.Sleep` in tests. Using `time.Sleep` makes tests fragile and potentially slow. Consider using a controlled clock for testing time-based functionality, via a clock interface:

```go
type Clock interface {
	Now() time.Time
}

type realClock struct{}

func (realClock) Now() time.Time { return time.Now() }

type Group struct {
	// ... other fields ...
	clock Clock
}

func NewGroup(id GroupID, name string, clock Clock) *Group {
	// ... implementation using clock.Now() ...
}
```

Then in tests:

```go
type mockClock struct {
	current time.Time
}

func (m *mockClock) Now() time.Time {
	m.current = m.current.Add(time.Second)
	return m.current
}
```

asset/service/group_service.go (1)
Lines 28-32: Reconsider the dummy-asset creation pattern. Creating dummy assets for event publishing seems like a workaround. Consider introducing group-specific event types:

```go
type GroupEvent interface {
	IsGroupEvent()
}

type GroupCreatedEvent struct {
	GroupID   domain.GroupID
	Name      string
	Timestamp time.Time
}

func (e GroupCreatedEvent) IsGroupEvent() {}
```

Also applies to lines 46-50 and 60-64.
asset/repository/persistence_repository.go (3)
Lines 10-14: Add a method for batch reading by IDs. Consider adding a `ReadMany` or `ReadByIDs` method to efficiently fetch multiple assets by their IDs in a single operation:

```diff
 type Reader interface {
 	Read(ctx context.Context, id domain.ID) (*domain.Asset, error)
 	List(ctx context.Context) ([]*domain.Asset, error)
 	FindByGroup(ctx context.Context, groupID domain.GroupID) ([]*domain.Asset, error)
+	ReadByIDs(ctx context.Context, ids []domain.ID) ([]*domain.Asset, error)
 }
```
Lines 22-26: Add methods for batch upload operations. Consider adding support for:

- Batch upload operations
- Upload progress tracking
- Cancellation support

Example interface extension:

```go
type FileOperator interface {
	Upload(ctx context.Context, id domain.ID, content io.Reader) error
	UploadWithProgress(ctx context.Context, id domain.ID, content io.Reader, progress chan<- int64) error
	BatchUpload(ctx context.Context, uploads map[domain.ID]io.Reader) error
	Download(ctx context.Context, id domain.ID) (io.ReadCloser, error)
	GetUploadURL(ctx context.Context, id domain.ID) (string, error)
}
```
Lines 1-32: Add package documentation. Consider adding package documentation to explain:

- The purpose and responsibilities of each interface
- The relationship between interfaces
- Expected error types and handling
- Usage examples

Example:

```go
/*
Package repository provides interfaces for asset persistence and management.

The package defines four main interfaces:
  - Reader: For reading asset data
  - Writer: For writing asset data
  - FileOperator: For handling asset file operations
  - PersistenceRepository: Combines all asset operations

Example usage:

	repo := NewAssetRepository()
	asset, err := repo.Read(ctx, assetID)
	if err != nil {
		// Handle error
	}
*/
package repository
```

asset/service/service.go (2)
Lines 49-58: Enhance error handling in the `Update` method. Errors from the repository are returned directly; consider distinguishing between different error types (e.g., not found, validation errors) to provide more meaningful feedback to the caller and improve error handling.
Lines 150-155: Continue deletion despite errors in `DeleteAllInGroup`. If an error occurs while deleting an asset, the function returns immediately, so subsequent assets in the group are never attempted. Consider attempting deletion of all assets and collecting any errors, so that one failure doesn't prevent the deletion of the others. Apply this diff to collect errors and continue processing:

```go
func (s *Service) DeleteAllInGroup(ctx context.Context, groupID domain.GroupID) error {
	assets, err := s.repo.FindByGroup(ctx, groupID)
	if err != nil {
		return err
	}
	var errs []error
	for _, asset := range assets {
		if err := s.Delete(ctx, asset.ID()); err != nil {
			log.Errorfc(ctx, "failed to delete asset %s in group %s: %v", asset.ID(), groupID, err)
			errs = append(errs, err)
		}
	}
	if len(errs) > 0 {
		return fmt.Errorf("errors occurred while deleting assets: %v", errs)
	}
	return nil
}
```

asset/graphql/schema.resolvers.go (1)
Lines 123-141: Handle errors individually when deleting multiple assets. In `DeleteAssets`, if an error occurs while deleting an asset, the function returns immediately, potentially leaving some assets undeleted. Consider modifying the logic to attempt deletion of all assets, collecting any errors to report back, so that every deletion is attempted.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (12)
asset/domain/group.go
(1 hunks)asset/domain/group_test.go
(1 hunks)asset/graphql/model.go
(1 hunks)asset/graphql/schema.graphql
(1 hunks)asset/graphql/schema.resolvers.go
(1 hunks)asset/infrastructure/gcs/client.go
(1 hunks)asset/repository/group_repository.go
(1 hunks)asset/repository/persistence_repository.go
(1 hunks)asset/service/group_service.go
(1 hunks)asset/service/group_service_test.go
(1 hunks)asset/service/service.go
(1 hunks)go.mod
(4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- asset/infrastructure/gcs/client.go
- asset/graphql/model.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: CI
🔇 Additional comments (13)
go.mod (5)
51-68
: Review new indirect dependenciesNew indirect dependencies have been added, particularly Google Cloud-related packages. These additions suggest integration with Google Cloud services, which aligns with the PR's objective of implementing file system repository functionality.
143-148
: Review Google API dependenciesThe Google API and related protobuf dependencies have been updated to recent versions. This is consistent with the cloud storage integration requirements.
71-73
: Verify cloud storage dependency versionsThe addition of
cloud.google.com/go/storage v1.49.0
is appropriate for implementing cloud storage functionality. However, let's verify if this is the latest stable version:
32-32
: Verify compatibility with updated direct dependenciesSeveral critical direct dependencies have been updated to newer versions:
testify
v1.9.0- OpenTelemetry libraries v1.29.0
crypto
v0.31.0text
v0.21.0Let's verify if these updates introduce any breaking changes:
Also applies to: 38-39, 42-42, 44-44
✅ Verification successful
Dependencies updated safely, no breaking changes detected
The dependency updates are safe to apply:
testify
v1.9.0 includes bug fixes and improvements with backward compatibilityopentelemetry-go
v1.29.0 maintains compatibility with Go 1.21crypto
andtext
updates follow Go 1 compatibility promise🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check for any breaking changes in the updated dependencies

# Check git history for any breaking changes
for repo in "stretchr/testify" "open-telemetry/opentelemetry-go" "golang/crypto" "golang/text"; do
  echo "Checking $repo for breaking changes..."
  gh api repos/$repo/releases --jq '.[] | select(.tag_name | contains("v1.9.0","v1.29.0","v0.31.0","v0.21.0")) | .body'
done
```

Length of output: 33746
130-133
: Ensure OpenTelemetry instrumentation compatibilityThe OpenTelemetry instrumentation packages have been updated to v0.54.0. These versions should be compatible with the core OpenTelemetry packages at v1.29.0.
✅ Verification successful
OpenTelemetry versions are correctly aligned
The versioning is consistent with OpenTelemetry's versioning scheme where core packages use v1.x.x and instrumentation packages use v0.x.x. All core packages are aligned at v1.29.0, and the instrumentation packages at v0.54.0 are compatible with this version.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify OpenTelemetry version compatibility

# Check version compatibility matrix
curl -s https://pkg.go.dev/go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/[email protected] | grep -A 5 "Required"
```

Length of output: 133
Script:
```shell
#!/bin/bash
# Check all OpenTelemetry related dependencies and their versions
echo "=== OpenTelemetry dependencies in go.mod ==="
rg "go\.opentelemetry\." go.mod

# Look for any version requirements or constraints in Go files
echo -e "\n=== OpenTelemetry version requirements in code ==="
rg -i "require.*opentelemetry" --type go
```

Length of output: 818
asset/repository/group_repository.go (1)

Lines 9-24: Well-designed repository interfaces! The interfaces follow the Interface Segregation Principle by separating read and write operations. The method signatures are clear and consistent with Go best practices.

asset/domain/group.go (1)

Lines 41-62 (refactor suggestion): Protect against concurrent modifications. The Group methods modify shared state (`updatedAt`) without synchronization, which could lead to race conditions in concurrent usage. Let's verify whether Group is used concurrently. Consider adding mutex protection:

```go
type Group struct {
	mu          sync.RWMutex
	id          GroupID
	name        string
	policy      string
	description string
	createdAt   time.Time
	updatedAt   time.Time
}

func (g *Group) UpdateName(name string) error {
	g.mu.Lock()
	defer g.mu.Unlock()
	if name == "" {
		return ErrEmptyGroupName
	}
	g.name = name
	g.updatedAt = time.Now()
	return nil
}
```

asset/service/service.go (1)
Lines 106-132: Refactor `DecompressZip` to separate concerns. The method mixes decompression logic with asset status management and uses the context to pass `assetID`, which makes testing and maintenance more difficult. Consider refactoring to separate these concerns:

- Remove the dependency on context for passing `assetID`.
- Split the method into two: one for decompression and one for updating asset status.

asset/graphql/schema.resolvers.go (3)

Lines 29-36: Ensure cleanup on upload failure in `UploadAsset`. If the file upload fails after creating the asset metadata, the asset remains in the repository with incomplete data. To prevent orphaned assets, add a cleanup mechanism to delete the asset metadata if the upload fails.

Lines 81-103: Avoid an unconditional status update to Active in `UpdateAssetMetadata`. The method sets the asset's status to `Active` without checking the current status, which might interfere with ongoing operations or status transitions. Consider verifying that transitioning to `Active` is appropriate before updating the status.

Lines 145-178: Validate input in `MoveAsset` to ensure a destination is provided. The method does not enforce that at least one of `toWorkspaceId` or `toProjectId` is provided. If both are nil, the asset's location remains unchanged, which may not be the intended behavior. Add validation to ensure at least one destination ID is provided.

asset/graphql/schema.graphql (2)
Lines 70-75: Add validation constraints to `UpdateAssetMetadataInput`. Currently there are no validation constraints on its fields. Consider adding constraints to ensure:

- `name` is not empty and has a maximum length.
- `size` is a non-negative integer with an upper limit (e.g., maximum file size).
- `contentType` matches a valid MIME type pattern.

Lines 97-101: Enforce at least one destination in `MoveAssetInput`. Both `toWorkspaceId` and `toProjectId` are optional; without at least one, the move operation doesn't affect the asset's location. Add validation to ensure that at least one of these fields is provided.
Actionable comments posted: 1
🧹 Nitpick comments (5)
asset/domain/build_test.go (4)

Lines 10-14: Consider a more descriptive test name. While the test is well-structured, consider renaming it to better describe what it's testing, e.g., `TestNewAssetBuilder_ShouldCreateNonNilBuilder`.

Lines 16-141: Consider adding more edge cases. The test suite is comprehensive but could benefit from additional cases:

- Maximum size limits
- Special characters in URLs
- Various content types
- Different status transitions

Lines 203-207: Consider adding uniqueness checks. While the test verifies non-nil and non-zero conditions, consider ensuring that multiple calls to `NewID()` generate unique IDs:

```go
func TestAssetBuilder_NewID(t *testing.T) {
	ids := make(map[ID]bool)
	for i := 0; i < 1000; i++ {
		b := NewAssetBuilder().NewID()
		assert.NotNil(t, b.a.id)
		assert.False(t, b.a.id.IsNil())
		assert.False(t, ids[b.a.id], "Generated ID is not unique")
		ids[b.a.id] = true
	}
}
```

Lines 209-337: Consider adding tests for chained operations. While individual setters are well-tested, consider adding test cases that verify the behavior of chained operations to ensure they work correctly together. Example:

```go
{
	name: "ChainedOperations",
	build: func() *AssetBuilder {
		return NewAssetBuilder().
			ID(id).
			WorkspaceID(wid).
			Name("test.txt").
			Size(100)
	},
	check: func(t *testing.T, b *AssetBuilder) {
		assert.Equal(t, id, b.a.id)
		assert.Equal(t, wid, b.a.workspaceID)
		assert.Equal(t, "test.txt", b.a.name)
		assert.Equal(t, int64(100), b.a.size)
	},
},
```

asset/domain/build.go (1)

Lines 123-125: Avoid potential code duplication between builders. The constructors `NewAssetBuilder` and `NewGroupBuilder` are very similar. Consider refactoring to reduce duplication, possibly through a generic builder pattern or embedding, to improve maintainability and reduce the risk of inconsistencies between builders.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
asset/domain/build.go
(1 hunks)asset/domain/build_test.go
(1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: GitGuardian Security Checks
🔇 Additional comments (4)
asset/domain/build_test.go (2)

Lines 1-8: LGTM! Clean package structure and imports. The package structure follows Go best practices, and the imports are minimal and necessary.

Lines 143-201: LGTM! Well-structured panic tests. The test cases effectively cover both successful builds and panic scenarios. The errcheck warning has been properly addressed with the nolint directive.

asset/domain/build.go (2)

Lines 46-52: Review the use of `panic` in `MustBuild` methods. `MustBuild` panics if there is an error during the build process. Ensure that using `panic` is appropriate in this context and aligns with the application's error-handling strategy. If panicking is not suitable, consider modifying `MustBuild` to handle errors differently, perhaps by returning an error or logging it.

Lines 40-42: Ensure `StatusPending` is defined. The code sets `b.a.status` to `StatusPending` if it's empty. Please verify that the constant `StatusPending` is defined in the codebase. If not, define it as follows:

```go
const StatusPending Status = "pending"
```

✅ Verification successful

`StatusPending` constant is already defined. The constant is defined in `asset/domain/asset.go` as `StatusPending Status = "PENDING"` and is used correctly in the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found

Script:

```shell
#!/bin/bash
# Search for StatusPending constant declarations
ast-grep --pattern 'const StatusPending = $_'

# Search for Status type definitions
ast-grep --pattern 'type Status $_'

# Search for any occurrence of StatusPending
rg "StatusPending" -A 2 -B 2

# Search specifically in the domain package
rg "StatusPending" "asset/domain/" -A 2 -B 2
```

Length of output: 3936
Actionable comments posted: 2
♻️ Duplicate comments (4)
asset/domain/asset.go (3)

Lines 113-117 (refactor suggestion): Add validation for status in `UpdateStatus`. The method sets the status without validating that the provided value is one of the defined constants (`StatusPending`, `StatusActive`, `StatusExtracting`, `StatusError`). Apply this change:

```go
func (a *Asset) UpdateStatus(status Status, err string) {
	if !isValidStatus(status) {
		return
	}
	a.status = status
	a.error = err
	a.updatedAt = time.Now()
}

func isValidStatus(s Status) bool {
	switch s {
	case StatusPending, StatusActive, StatusExtracting, StatusError:
		return true
	default:
		return false
	}
}
```

Lines 132-135 (refactor suggestion): Validate `WorkspaceID` before moving an asset. `MoveToWorkspace` updates `workspaceID` without verifying that the provided `WorkspaceID` exists. To maintain domain integrity, consider adding a validation step or error handling to confirm the workspace exists before moving the asset.

Lines 137-140 (refactor suggestion): Validate `ProjectID` before moving an asset. `MoveToProject` updates `projectID` without verifying that the provided `ProjectID` exists. To ensure consistency, consider adding validation or error handling to confirm the project exists before moving the asset.

asset/graphql/schema.resolvers.go (1)

Lines 14-47 (refactor suggestion): Ensure cleanup on failure in `UploadAsset`. If an error occurs during file-content upload, the asset metadata remains in the repository without the corresponding content, which can lead to orphaned asset records. Consider implementing a cleanup mechanism that removes the asset metadata when the upload fails:

```go
func (r *mutationResolver) UploadAsset(ctx context.Context, input UploadAssetInput) (*UploadAssetPayload, error) {
	id, err := domain.IDFrom(input.ID)
	if err != nil {
		return nil, err
	}

	// Create asset metadata first
	asset := domain.NewAsset(
		id,
		input.File.Filename,
		input.File.Size,
		input.File.ContentType,
	)
	if err := r.assetUsecase.CreateAsset(ctx, asset); err != nil {
		return nil, err
	}

	// Ensure cleanup on failure
	success := false
	defer func() {
		if !success {
			_ = r.assetUsecase.DeleteAsset(ctx, id)
		}
	}()

	// Upload file content
	if err := r.assetUsecase.UploadAssetContent(ctx, id, FileFromUpload(&input.File)); err != nil {
		return nil, err
	}

	// Update asset status to active
	asset.UpdateStatus(domain.StatusActive, "")
	if err := r.assetUsecase.UpdateAsset(ctx, asset); err != nil {
		return nil, err
	}

	success = true
	return &UploadAssetPayload{
		Asset: AssetFromDomain(asset),
	}, nil
}
```
🧹 Nitpick comments (9)
asset/domain/asset.go (3)

Lines 86-97: Add input validation in the `NewAsset` constructor. The function creates a new asset without validating parameters such as `name`, `size`, and `contentType`. Consider adding validation to ensure that:

- `name` is not empty and meets any required naming conventions.
- `size` is a non-negative value.
- `contentType` is a valid MIME type.

This will enhance data integrity and prevent potential issues with invalid asset data.

Lines 119-130: Add validation in the `UpdateMetadata` method. The method updates asset fields without validating the input values. Consider adding validation to ensure:

- `name` is not empty and follows naming conventions.
- `url` is a valid URL format if provided.
- `contentType` is a valid MIME type.

Lines 143-146: Add validation for `size` in the `SetSize` method. The method sets the size of the asset without validating the input. Consider ensuring the `size` is a non-negative value.

asset/usecase/interactor/interactor.go (2)

Lines 112-126: Use typed keys for context values to avoid key collisions. In `DecompressZipContent`, context values are accessed with the plain string key `"assetID"`. Untyped string keys can collide and cause unintended behavior, especially in larger applications. Consider defining a custom type for context keys:

```go
type contextKey string

const assetIDKey contextKey = "assetID"

// Usage
if assetID, ok := ctx.Value(assetIDKey).(domain.ID); ok {
	// ...
}
```

This approach prevents key collisions and improves code safety.

Lines 145-149: Continue deleting assets even if one deletion fails. In `DeleteAllAssetsInGroup`, if deleting one asset fails, the function returns the error immediately, stopping the deletion of the remaining assets and potentially leaving the system in a partial state. Consider attempting all deletions and collecting errors:

```go
var errs []error
for _, asset := range assets {
	if err := i.DeleteAsset(ctx, asset.ID()); err != nil {
		log.Errorfc(ctx, "failed to delete asset %s in group %s: %v", asset.ID(), groupID, err)
		errs = append(errs, err)
	}
}
if len(errs) > 0 {
	return fmt.Errorf("failed to delete some assets: %v", errs)
}
```

asset/usecase/usecase.go (3)

Lines 29-30: Consider buffered channels for zip operations. `DecompressZipContent` and `CompressToZip` return unbuffered channels; for large zip files this can block if consumers don't read fast enough. Consider adding a buffer-size parameter or implementing a backpressure mechanism.

Lines 21-23: Add content-type and size parameters for upload. `UploadAssetContent` could benefit from additional parameters: a content type for proper MIME-type handling and a content size for validation and optimization.

```diff
-UploadAssetContent(ctx context.Context, id domain.ID, content io.Reader) error
+UploadAssetContent(ctx context.Context, id domain.ID, content io.Reader, contentType string, size int64) error
```

Line 27: Consider pagination for `ListAssets`. The method returns all assets at once, which could be problematic with a large number of assets. Consider implementing pagination or streaming:

```diff
-ListAssets(ctx context.Context) ([]*domain.Asset, error)
+ListAssets(ctx context.Context, offset int, limit int) ([]*domain.Asset, error)
```

asset/graphql/resolver.go (1)

Lines 13-18: Consider validation in the constructor. `NewResolver` could validate its parameters to ensure non-nil dependencies:

```go
func NewResolver(assetUsecase assetusecase.Usecase, pubsub repository.PubSubRepository) *Resolver {
	if assetUsecase == nil {
		panic("assetUsecase must not be nil")
	}
	if pubsub == nil {
		panic("pubsub must not be nil")
	}
	return &Resolver{
		assetUsecase: assetUsecase,
		pubsub:       pubsub,
	}
}
```
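On the buffered-channel suggestion for the zip operations above, the core idea can be shown in a few lines: a buffered channel lets the producer run ahead of a slow consumer up to the buffer size instead of blocking on every send. This is a self-contained sketch; the chunk type and counts are illustrative, not the usecase's actual API.

```go
package main

import "fmt"

// produce streams chunks through a channel whose capacity is configurable,
// which is the shape a buffer-size parameter on DecompressZipContent could take.
func produce(bufSize int) <-chan []byte {
	out := make(chan []byte, bufSize) // buffered: decouples producer pace from consumer pace
	go func() {
		defer close(out)
		for i := 0; i < 4; i++ {
			out <- []byte{byte(i)}
		}
	}()
	return out
}

func main() {
	total := 0
	for chunk := range produce(2) {
		total += len(chunk)
	}
	fmt.Println("bytes received:", total) // bytes received: 4
}
```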
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
asset/domain/asset.go
(1 hunks)asset/graphql/resolver.go
(1 hunks)asset/graphql/schema.resolvers.go
(1 hunks)asset/usecase/interactor/interactor.go
(1 hunks)asset/usecase/usecase.go
(1 hunks)
🔇 Additional comments (4)
asset/graphql/schema.resolvers.go (1)

Lines 66-87: Add validation for metadata fields in `UpdateAssetMetadata`. The method updates asset metadata without validating the input values. Consider adding validation to ensure `input.Name` is not empty and follows naming conventions, and `input.ContentType` is a valid MIME type. This will help maintain data integrity and prevent invalid data from being saved.

asset/usecase/usecase.go (1)

Lines 11-34: LGTM on the interface design. The interface is well-documented and provides a comprehensive set of operations for asset management. The use of `context.Context` for cancellation support and clear method signatures makes it easy to implement and maintain.

asset/graphql/resolver.go (2)

Lines 8-11: LGTM on the resolver structure. The Resolver struct follows good practices: clear dependency injection and proper encapsulation of dependencies.

Lines 1-18: Verify the GraphQL schema implementation. The resolver structure is set up correctly, but ensure that corresponding GraphQL schema definitions exist and align with the usecase interface methods.

✅ Verification successful

The GraphQL schema implementation is complete and properly aligned. The schema in `asset/graphql/schema.graphql` contains all necessary type definitions, inputs, queries, and mutations corresponding to the resolver structure, including:

- the Asset type definition
- 7 mutation operations (upload, delete, move, etc.)
- 2 query operations (Asset, Assets)

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Search for GraphQL schema definitions
rg -t graphql "type Asset" .
rg -t graphql "input.*Asset" .
```

Length of output: 127

Script:

```shell
#!/bin/bash
# Find GraphQL schema files
echo "=== GraphQL Schema Files ==="
fd -e graphql -e gql -e graphqls

# Search for Asset-related definitions in all files (without file type filtering)
echo -e "\n=== Asset Type Definitions ==="
rg "type\s+Asset"

echo -e "\n=== Asset Input Definitions ==="
rg "input\s+.*Asset"

echo -e "\n=== Asset Query/Mutation Definitions ==="
rg "(query|mutation).*Asset"
```

Length of output: 6262
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main      #66      +/-   ##
==========================================
+ Coverage   47.66%   48.57%   +0.91%
==========================================
  Files          97      107      +10
  Lines        9323    10231     +908
==========================================
+ Hits         4444     4970     +526
- Misses       4561     4917     +356
- Partials      318      344      +26
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 2
🧹 Nitpick comments (8)
asset/domain/group_builder.go (3)
Lines 17-30: Consider additional validations in the Build method. While the current validations are good, consider adding:

- A maximum-length check for name
- Policy format validation if there's a specific format requirement
- Validation for zero or far-future timestamps

Lines 32-38: Document panic conditions in MustBuild. Add a comment documenting when this method panics to help API consumers make informed decisions about using Build vs MustBuild. Example:

```go
// MustBuild builds the Group and returns it. It panics if the build fails due to:
//   - Missing or invalid ID
//   - Empty group name
func (b *GroupBuilder) MustBuild() *Group {
```

Lines 65-73: Consider defensive copying for time fields. Time values in Go carry a monotonic clock reading and location; consider normalizing them to prevent surprises from external modifications:

```go
func (b *GroupBuilder) CreatedAt(createdAt time.Time) *GroupBuilder {
	b.g.createdAt = createdAt.UTC()
	return b
}

func (b *GroupBuilder) UpdatedAt(updatedAt time.Time) *GroupBuilder {
	b.g.updatedAt = updatedAt.UTC()
	return b
}
```

asset/domain/group_test.go (1)
Lines 10-21: Enhance test coverage for NewGroup. Consider adding test cases for:

- Group creation with an empty name
- Group creation with an invalid ID
- More precise time assertions using time ranges instead of NotZero

Example of a time-range assertion:

```go
now := time.Now()
g := NewGroup(id, "test-group")
assert.True(t, g.CreatedAt().After(now.Add(-time.Second)))
assert.True(t, g.CreatedAt().Before(now.Add(time.Second)))
```

asset/domain/asset.go (2)
Lines 66-77: Add validation in the `UpdateMetadata` method. The method updates the `name`, `url`, and `contentType` fields without validating the input, so invalid or malicious data could be set inadvertently. Consider adding validation to ensure `name` is within acceptable length and character constraints, `url` is a well-formed URL, and `contentType` matches expected MIME types. Example:

```go
func (a *Asset) UpdateMetadata(name, url, contentType string) {
	if name != "" {
		// Validate name
		a.name = name
	}
	if url != "" {
		// Validate URL format
		a.url = url
	}
	if contentType != "" {
		// Validate content type
		a.contentType = contentType
	}
	a.updatedAt = time.Now()
}
```
Lines 33-43: Refactor to reduce code duplication when updating `updatedAt`. The `updatedAt` field is set to `time.Now()` in multiple methods; this repetition is error-prone if the timestamp logic ever changes. Consider a private helper:

```go
func (a *Asset) touch() {
	a.updatedAt = time.Now()
}

func (a *Asset) UpdateStatus(status Status, err string) {
	a.status = status
	a.error = err
	a.touch()
}

// Apply the same refactoring in the other methods that set updatedAt.
```

Also applies to lines 60-92.
asset/domain/build.go (2)
Lines 48-54: Add documentation for panic behavior. The `MustBuild` method follows the panic-on-error pattern correctly, but it should be documented to warn callers. Add a doc comment like this:

```go
// MustBuild calls Build and panics if an error occurs.
// It should only be used during initialization or when you are absolutely
// certain that the build will succeed.
func (b *AssetBuilder) MustBuild() *Asset {
```
Lines 56-119: Consider adding validation in critical setters. While deferring all validation to `Build()` is a valid approach, adding checks in critical setters like `Size` and `URL` would fail fast and catch issues earlier in the build chain. Example for the `Size` setter:

```go
func (b *AssetBuilder) Size(size int64) *AssetBuilder {
	if size <= 0 {
		panic("size must be greater than 0")
	}
	b.a.size = size
	return b
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
asset/domain/asset.go
(1 hunks)asset/domain/build.go
(1 hunks)asset/domain/group.go
(1 hunks)asset/domain/group_builder.go
(1 hunks)asset/domain/group_test.go
(1 hunks)asset/domain/id.go
(1 hunks)asset/domain/id/id.go
(1 hunks)asset/graphql/schema.resolvers.go
(1 hunks)asset/usecase/usecase.go
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- asset/usecase/usecase.go
- asset/domain/group.go
- asset/graphql/schema.resolvers.go
🔇 Additional comments (9)
asset/domain/group_builder.go (1)
9-15
: LGTM! Clean implementation of the builder pattern.The struct and constructor are well-designed, following the builder pattern best practices.
asset/domain/asset.go (3)
Lines 60-64: Add validation for status in the `UpdateStatus` method. The method does not validate whether the provided `status` is one of the predefined constants, which could let invalid or unintended status values be set. Consider adding validation so only valid statuses are accepted:

```go
func isValidStatus(s Status) bool {
	switch s {
	case StatusPending, StatusActive, StatusExtracting, StatusError:
		return true
	default:
		return false
	}
}

func (a *Asset) UpdateStatus(status Status, err string) {
	if !isValidStatus(status) {
		// Handle invalid status, possibly return an error or log
		return
	}
	a.status = status
	a.error = err
	a.updatedAt = time.Now()
}
```
79-82
: Validate existence of Workspace before movingThe
MoveToWorkspace
method updates theworkspaceID
without verifying whether the provided ID corresponds to an existing workspace. This might lead to assets being associated with non-existent workspaces.Consider adding a check to confirm that the
workspaceID
exists before updating.
84-87
: Validate existence of Project before movingSimilarly, the
MoveToProject
method updates theprojectID
without verifying the existence of the project. Associating assets with non-existent projects could cause inconsistencies.Consider implementing a validation step to ensure the
projectID
is valid before proceeding.asset/domain/id/id.go (1)
1-42
: Code structure and implementation look goodThe ID definitions and associated methods are well-structured and follow best practices for type safety and uniqueness.
`asset/domain/build.go` (4)

`1-8`: **LGTM! Clean package structure and imports.**

The package declaration and imports are well-organized, following Go best practices.

`10-14`: **Declare missing error variables.**

The `ErrInvalidID` error is used in the `Build()` method but not declared in the error variables block.

Apply this diff to declare the missing error variable:

```diff
 var (
 	ErrEmptyWorkspaceID = errors.New("workspace id is required")
 	ErrEmptyURL         = errors.New("url is required")
 	ErrEmptySize        = errors.New("size must be greater than 0")
+	ErrInvalidID        = errors.New("invalid id")
 )
```
`16-22`: **LGTM! Clean builder pattern implementation.**

The `AssetBuilder` struct and its constructor follow the builder pattern correctly with proper encapsulation.

`24-46`: **LGTM! Comprehensive validation in Build method.**

The validation logic and default value handling are well-implemented. The error messages are clear and descriptive.
Actionable comments posted: 16
♻️ Duplicate comments (9)
`asset/infrastructure/pubsub/pubsub.go` (2)

`81-99`: ⚠️ **Potential issue: Add nil check and fix logging function.**

The `PublishAssetCreated` method should:

- Validate that the asset parameter is not nil
- Use the correct logging function name

Apply this diff to add the nil check and fix logging:

```diff
 func (p *AssetPubSub) PublishAssetCreated(ctx context.Context, asset *entity.Asset) error {
+	if asset == nil {
+		return fmt.Errorf("asset cannot be nil")
+	}
+
 	event := repository.AssetEvent{
 		Type:        repository.EventTypeAssetCreated,
 		AssetID:     asset.ID(),
 		WorkspaceID: asset.WorkspaceID(),
 		ProjectID:   asset.ProjectID(),
 		Status:      string(asset.Status()),
 		Error:       asset.Error(),
 	}
 	if err := p.publisher.Publish(ctx, p.topic, event); err != nil {
-		log.Errorfc(ctx, "failed to publish asset created event: %v", err)
+		log.Errorf(ctx, "failed to publish asset created event: %v", err)
 		return err
 	}
 	p.notify(ctx, event)
 	return nil
 }
```
`101-195`: ⚠️ **Potential issue: Add nil checks and fix logging functions in all publish methods.**

The remaining publish methods have the same issues as `PublishAssetCreated`:

- Missing nil checks for asset parameters
- Using incorrect logging function names

The same fixes should be applied to:

- `PublishAssetUpdated` (lines 101-119)
- `PublishAssetUploaded` (lines 137-155)
- `PublishAssetExtracted` (lines 157-175)
- `PublishAssetTransferred` (lines 177-195)

`asset/infrastructure/pubsub/pubsub_test.go` (2)
`23-39`: 🛠️ **Refactor suggestion: Add error simulation to the mock publisher.**

The mock publisher always returns a nil error, which leaves the error-handling paths in the `AssetPubSub` implementation untested.

Add error simulation to the mock publisher:

```diff
 type mockPublisher struct {
 	published   []mockPublishedEvent
+	shouldError bool
+	errorMsg    string
 }

 func (m *mockPublisher) Publish(ctx context.Context, topic string, msg interface{}) error {
+	if m.shouldError {
+		return errors.New(m.errorMsg)
+	}
 	// Make a copy of the event to ensure it's not modified after storage
 	if event, ok := msg.(repository.AssetEvent); ok {
 		eventCopy := repository.AssetEvent{
 			Type:        event.Type,
 			AssetID:     event.AssetID,
 			WorkspaceID: event.WorkspaceID,
 			ProjectID:   event.ProjectID,
 			Status:      event.Status,
 			Error:       event.Error,
 		}
 		m.published = append(m.published, mockPublishedEvent{topic: topic, msg: eventCopy})
 	} else {
 		m.published = append(m.published, mockPublishedEvent{topic: topic, msg: msg})
 	}
 	return nil
 }
```
`49-86`: 🛠️ **Refactor suggestion: Add test cases for error scenarios.**

The test suite only covers successful publishing scenarios.

Add error test cases to the test suite:

```go
// Example additional test cases
{
	name: "handle nil asset",
	publish: func() error {
		return ps.PublishAssetCreated(ctx, nil)
	},
	wantErr: true,
},
{
	name: "handle context cancellation",
	publish: func() error {
		ctx, cancel := context.WithCancel(ctx)
		cancel()
		return ps.PublishAssetCreated(ctx, asset)
	},
	wantErr: true,
},
```

`asset/infrastructure/gcs/client_test.go` (2)
`91-96`: ⚠️ **Potential issue: Add nil checks to prevent a potential panic.**

The `Close()` method could panic if the object doesn't exist in the bucket.

Add appropriate nil checks:

```diff
 func (w *mockWriter) Close() error {
 	obj := w.bucket.objects[w.objectName]
+	if obj == nil {
+		return fmt.Errorf("object not found: %s", w.objectName)
+	}
 	obj.data = w.buf.Bytes()
 	obj.attrs = w.attrs
 	return nil
 }
```
`262-270`: ⚠️ **Potential issue: Add prefix validation in the DeleteAll method.**

The `DeleteAll` method should validate the prefix to prevent accidental deletion of all objects.

Add prefix validation:

```diff
 func (c *testClient) DeleteAll(ctx context.Context, prefix string) error {
+	if prefix == "" {
+		return fmt.Errorf("prefix cannot be empty")
+	}
+	if strings.Contains(prefix, "..") {
+		return fmt.Errorf("prefix cannot contain path traversal")
+	}
 	fullPrefix := path.Join(c.basePath, prefix)
 	for name := range c.mockBucket.objects {
 		if strings.HasPrefix(name, fullPrefix) {
 			delete(c.mockBucket.objects, name)
 		}
 	}
 	return nil
 }
```

`asset/graphql/schema.resolvers.go` (3)
`14-48`: ⚠️ **Potential issue: Add transaction handling and cleanup for upload failures.**

The current implementation could leave orphaned metadata if the file upload fails.

Apply this diff to add cleanup on failure:

```diff
 func (r *mutationResolver) UploadAsset(ctx context.Context, input UploadAssetInput) (*UploadAssetPayload, error) {
 	assetID, err := id.IDFrom(input.ID)
 	if err != nil {
 		return nil, err
 	}

 	// Create asset metadata
 	asset := entity.NewAsset(
 		assetID,
 		input.File.Filename,
 		input.File.Size,
 		input.File.ContentType,
 	)

 	// Create asset metadata first
 	if err := r.assetUsecase.CreateAsset(ctx, asset); err != nil {
 		return nil, err
 	}

+	// Ensure cleanup on failure
+	success := false
+	defer func() {
+		if !success {
+			_ = r.assetUsecase.DeleteAsset(ctx, assetID)
+		}
+	}()
+
 	// Upload file content
 	if err := r.assetUsecase.UploadAssetContent(ctx, assetID, FileFromUpload(&input.File)); err != nil {
 		return nil, err
 	}

 	// Update asset status to active
 	asset.UpdateStatus(entity.StatusActive, "")
 	if err := r.assetUsecase.UpdateAsset(ctx, asset); err != nil {
 		return nil, err
 	}

+	success = true
 	return &UploadAssetPayload{
 		Asset: AssetFromDomain(asset),
 	}, nil
 }
```
`67-88`: 🛠️ **Refactor suggestion: Add validation for metadata fields.**

The method does not validate the `name`, `size`, or `contentType` fields before updating the asset.

**Reconsider the unconditional status transition to Active.**

The resolver unconditionally sets the status to Active, which could interrupt ongoing operations.
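As a sketch of the missing field validation, the standard library's `mime.ParseMediaType` can vet the content type. The field set (`name`, `size`, `contentType`) comes from the review comment; the function name and its placement in the resolver are assumptions:

```go
package main

import (
	"errors"
	"fmt"
	"mime"
)

// validateMetadata sketches the checks the resolver could run before
// calling UpdateMetadata (hypothetical helper, not part of the PR).
func validateMetadata(name string, size int64, contentType string) error {
	if name == "" {
		return errors.New("name is required")
	}
	if size <= 0 {
		return errors.New("size must be greater than 0")
	}
	// ParseMediaType rejects empty or malformed MIME types.
	if _, _, err := mime.ParseMediaType(contentType); err != nil {
		return fmt.Errorf("invalid content type %q: %w", contentType, err)
	}
	return nil
}

func main() {
	fmt.Println(validateMetadata("a.txt", 10, "text/plain")) // <nil>
	fmt.Println(validateMetadata("", 10, "text/plain"))
}
```

Running such checks before mutating the asset keeps the metadata invariants in one place instead of relying on downstream storage errors.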
`128-163`: ⚠️ **Potential issue: Validate existence of the target WorkspaceID or ProjectID in MoveAsset.**

The asset's `WorkspaceID` or `ProjectID` is updated without verifying that the provided IDs exist.
🧹 Nitpick comments (30)
`asset/domain/types.go` (1)

`22-28`: **Consider adding error validation in the `Invalid` constructor.**

While the current implementation is fine, consider validating that at least one error is provided when calling `Invalid()`. This would prevent creating invalid states without any explanation. Note that the variadic parameter is named `errors`, which shadows the `errors` package, so it needs to be renamed before `errors.New` can be called inside the function:

```diff
-func Invalid(errors ...error) ValidationResult {
+func Invalid(errs ...error) ValidationResult {
+	if len(errs) == 0 {
+		errs = []error{errors.New("invalid state: no specific errors provided")}
+	}
 	return ValidationResult{
 		IsValid: false,
-		Errors:  errors,
+		Errors:  errs,
 	}
 }
```

`asset/domain/tests/types_test.go` (2)
`11-49`: **LGTM! Well-structured table-driven tests.**

The test cases are well-organized and cover the main scenarios effectively.

Consider adding these additional test cases:

- Empty error slice for an invalid result
- Multiple errors for an invalid result
- Nil error in the error slice

Example:

```go
{
	name:    "invalid result with empty errors",
	isValid: false,
	errors:  []error{},
	want: domain.ValidationResult{
		IsValid: false,
		Errors:  []error{},
	},
},
{
	name:    "invalid result with multiple errors",
	isValid: false,
	errors:  []error{errors.New("error1"), errors.New("error2")},
	want: domain.ValidationResult{
		IsValid: false,
		Errors:  []error{errors.New("error1"), errors.New("error2")},
	},
},
```
`57-63`: **Add a test case for multiple errors.**

Consider adding a test case that validates behavior with multiple errors, to ensure the `Invalid` constructor handles them correctly:

```go
func TestInvalidWithMultipleErrors(t *testing.T) {
	err1 := errors.New("test error 1")
	err2 := errors.New("test error 2")
	result := domain.Invalid(err1, err2)

	assert.False(t, result.IsValid)
	assert.Len(t, result.Errors, 2)
	assert.Equal(t, err1.Error(), result.Errors[0].Error())
	assert.Equal(t, err2.Error(), result.Errors[1].Error())
}
```

`asset/domain/service/service.go` (4)
`12-17`: **Consider enhancing the Storage interface with additional operations.**

The interface covers basic operations but could be made more robust:

- Add an `Exists` method to check if a file exists
- Add a `List` method to enumerate files
- Consider including a content-type parameter in `Upload`

Example enhancement:

```diff
 type Storage interface {
-	Upload(ctx context.Context, workspaceID id.WorkspaceID, name string, content io.Reader) (string, int64, error)
+	Upload(ctx context.Context, workspaceID id.WorkspaceID, name string, contentType string, content io.Reader) (string, int64, error)
 	Download(ctx context.Context, url string) (io.ReadCloser, error)
 	Delete(ctx context.Context, url string) error
+	Exists(ctx context.Context, url string) (bool, error)
+	List(ctx context.Context, workspaceID id.WorkspaceID) ([]string, error)
 }
```
`19-23`: **Consider enhancing error handling in the Extractor interface.**

The interface could benefit from:

- Custom error types for different extraction failures
- Progress tracking for large file extractions

Example enhancement:

```go
// ExtractError represents specific extraction error cases
type ExtractError struct {
	Type    string // e.g., "format_unsupported", "corrupted_file"
	Message string
}

type Extractor interface {
	Extract(ctx context.Context, asset *entity.Asset, progress chan<- float64) *ExtractError
	IsExtractable(contentType string) bool
}
```
`25-33`: **Consider decoupling the Move operation parameters.**

The `Move` method combines the `projectID` and `groupID` parameters, which might be too tightly coupled. Consider splitting it into separate operations for more flexibility:

```diff
 type AssetService interface {
 	Upload(ctx context.Context, workspaceID id.WorkspaceID, name string, content io.Reader) (*entity.Asset, error)
 	Download(ctx context.Context, assetID id.ID) (io.ReadCloser, error)
 	Extract(ctx context.Context, assetID id.ID) error
-	Move(ctx context.Context, assetID id.ID, projectID id.ProjectID, groupID id.GroupID) error
+	AssignToProject(ctx context.Context, assetID id.ID, projectID id.ProjectID) error
+	AssignToGroup(ctx context.Context, assetID id.ID, groupID id.GroupID) error
 	Delete(ctx context.Context, assetID id.ID) error
 	SetEventPublisher(publisher event.Publisher)
 }
```
`35-41`: **Consider using an options pattern for the Update method.**

The `Update` method combines multiple fields that may not all need to change at once. An options struct makes partial updates explicit:

```go
type GroupUpdateOptions struct {
	Name        *string
	Policy      *string
	Description *string
}

type GroupService interface {
	Create(ctx context.Context, name string, policy string) (*entity.Group, error)
	Update(ctx context.Context, id id.GroupID, opts GroupUpdateOptions) (*entity.Group, error)
	Delete(ctx context.Context, id id.GroupID) error
	SetEventPublisher(publisher event.Publisher)
}
```

`asset/domain/event/publisher.go` (2)
`15-19`: **Consider using type-safe event types.**

The interface is well-structured, but using a plain string for `eventType` could lead to runtime errors. Consider:

- Using typed constants or an enum-like approach for `eventType`
- Adding validation for an empty `eventType` and a nil handler

Example improvement:

```go
type EventType string

const (
	AssetCreated EventType = "asset.created"
	AssetUpdated EventType = "asset.updated"
	// ... other event types
)

type Subscriber interface {
	Subscribe(eventType EventType, handler Handler) error
	Unsubscribe(eventType EventType, handler Handler) error
}
```
`1-25`: **Consider adding event persistence and error-handling strategies.**

The event system design is clean and follows good practices. Consider these architectural improvements:

- Add an event persistence strategy for reliability
- Define error-handling and retry policies
- Consider adding event versioning for future compatibility
- Add event validation hooks
- Consider implementing event filtering capabilities
`asset/domain/builder/asset.go` (3)

`11-17`: **Add godoc comments for better documentation.**

While the implementation is correct, godoc comments would improve documentation and IDE support:

```diff
+// AssetBuilder provides a fluent interface for constructing Asset entities
 type AssetBuilder struct {
 	a *entity.Asset
 }

+// NewAssetBuilder creates a new AssetBuilder instance with an empty Asset
 func NewAssetBuilder() *AssetBuilder {
 	return &AssetBuilder{a: &entity.Asset{}}
 }
```

`42-48`: **Document the panic behavior of MustBuild.**

While the implementation is correct, `MustBuild`'s panic behavior should be clearly documented, as it can affect program stability:

```diff
+// MustBuild builds the Asset and panics if validation fails.
+// Use this only when you are certain that the build will succeed.
 func (b *AssetBuilder) MustBuild() *entity.Asset {
```

`104-128`: **Simplify the CreatedAt implementation.**

The current implementation recreates the asset and manually restores all fields, which is fragile and could miss future fields. Consider adding a direct `CreatedAt` setter in the entity:

```diff
 func (b *AssetBuilder) CreatedAt(createdAt time.Time) *AssetBuilder {
-	// We need to create a new asset to set createdAt
-	b.a = entity.NewAsset(b.a.ID(), b.a.Name(), b.a.Size(), b.a.ContentType())
-	// Restore other fields
-	if b.a.GroupID() != (id.GroupID{}) {
-		b.GroupID(b.a.GroupID())
-	}
-	// ... more field restorations ...
+	b.a.SetCreatedAt(createdAt) // Suggested new method in entity
 	return b
 }
```

`asset/domain/tests/errors_test.go` (2)
`16-86`: **Consider standardizing error message patterns.**

While the error messages are clear, consider standardizing them for consistency:

- Operation-related errors (upload/download/delete) could follow one pattern:
  - "failed to upload asset" → "asset upload failed"
  - "failed to download asset" → "asset download failed"
  - "failed to delete asset" → "asset deletion failed"
- Validation errors could start with the subject and use correct capitalization:
  - "workspace id is required" → "workspace ID is required"
  - "url is required" → "URL is required"

`88-93`: **Consider enabling parallel test execution.**

Since these are independent test cases, they can run in parallel to reduce test execution time:

```diff
 for _, tt := range tests {
 	t.Run(tt.name, func(t *testing.T) {
+		t.Parallel()
 		assert.Equal(t, tt.want, tt.err.Error())
 	})
 }
```
`asset/repository/event.go` (1)

`22-29`: **Consider adding validation for AssetEvent fields.**

The `AssetEvent` struct contains critical fields but lacks validation. Consider a constructor that ensures required fields are properly set:

```diff
 type AssetEvent struct {
 	Type        EventType
 	AssetID     id.ID
 	WorkspaceID id.WorkspaceID
 	ProjectID   id.ProjectID
 	Status      string
 	Error       string
 }

+// NewAssetEvent creates a new AssetEvent with validation
+func NewAssetEvent(typ EventType, assetID id.ID, workspaceID id.WorkspaceID, projectID id.ProjectID, status string, err string) (*AssetEvent, error) {
+	if typ == "" {
+		return nil, errors.New("event type is required")
+	}
+	if assetID.IsEmpty() {
+		return nil, errors.New("asset ID is required")
+	}
+	return &AssetEvent{
+		Type:        typ,
+		AssetID:     assetID,
+		WorkspaceID: workspaceID,
+		ProjectID:   projectID,
+		Status:      status,
+		Error:       err,
+	}, nil
+}
```

`asset/domain/repository/repository.go` (2)
`10-18`: **Consider adding pagination and documentation.**

The `Asset` interface could benefit from:

- Pagination support for list operations (`FindByWorkspace`, `FindByProject`, `FindByGroup`)
- Method documentation explaining behavior, errors, and constraints
- Batch operations for better performance

Example implementation:

```diff
+// Asset provides operations for managing assets
 type Asset interface {
+	// Save stores or updates the asset
+	// Returns ErrAssetNotFound if the asset doesn't exist for updates
 	Save(ctx context.Context, asset *entity.Asset) error
+
+	// FindByID retrieves an asset by its ID
+	// Returns ErrAssetNotFound if not found
 	FindByID(ctx context.Context, id id.ID) (*entity.Asset, error)
+
 	FindByIDs(ctx context.Context, ids []id.ID) ([]*entity.Asset, error)
-	FindByWorkspace(ctx context.Context, workspaceID id.WorkspaceID) ([]*entity.Asset, error)
+	// FindByWorkspace retrieves paginated assets for a workspace
+	FindByWorkspace(ctx context.Context, workspaceID id.WorkspaceID, pagination *Pagination) (*AssetList, error)
 	// ... similar changes for other methods
+
+	// SaveBatch stores multiple assets in a single operation
+	SaveBatch(ctx context.Context, assets []*entity.Asset) error
 }

+// Pagination defines pagination parameters
+type Pagination struct {
+	Limit  int
+	Offset int
+}
+
+// AssetList represents a paginated list of assets
+type AssetList struct {
+	Items      []*entity.Asset
+	TotalCount int
+	HasMore    bool
+}
```

`20-25`: **Add consistency between the Group and Asset interfaces.**

The `Group` interface should follow the same patterns as the `Asset` interface:

```diff
 type Group interface {
+	// Add similar documentation and pagination support as the Asset interface
 	Save(ctx context.Context, group *entity.Group) error
 	FindByID(ctx context.Context, id id.GroupID) (*entity.Group, error)
 	FindByIDs(ctx context.Context, ids []id.GroupID) ([]*entity.Group, error)
+	// Add methods for listing groups by workspace
+	FindByWorkspace(ctx context.Context, workspaceID id.WorkspaceID, pagination *Pagination) (*GroupList, error)
 	Remove(ctx context.Context, id id.GroupID) error
 }
```

`asset/domain/event/event.go` (1)
`16-27`: **Enhance BaseEvent with additional metadata.**

Consider adding more metadata to `BaseEvent` to improve debugging and tracing capabilities:

```diff
 type BaseEvent struct {
 	occurredAt time.Time
+	// Add useful metadata
+	eventID       string // Unique identifier for the event
+	causationID   string // ID of the event that caused this event
+	correlationID string // ID to correlate related events
+	source        string // Source of the event (service name, etc.)
 }

-func NewBaseEvent() BaseEvent {
+func NewBaseEvent(opts ...EventOption) BaseEvent {
+	eventID := uuid.New().String()
 	return BaseEvent{
 		occurredAt: time.Now(),
+		eventID:    eventID,
+		// ... initialize other fields
 	}
 }

+// EventOption allows customizing event creation
+type EventOption func(*BaseEvent)
+
+func WithCorrelationID(id string) EventOption {
+	return func(e *BaseEvent) {
+		e.correlationID = id
+	}
+}
```

`asset/domain/event/tests/event_test.go` (1)
`13-20`: **Consider making the timestamp test more resilient.**

While the current implementation works, time-based tests can be flaky under system load. Consider:

- Adding a small buffer to the time window
- Using a mock clock for deterministic testing
- Testing additional base event properties, if any exist

Example improvement:

```diff
 func TestBaseEvent(t *testing.T) {
 	before := time.Now()
 	e := event.NewBaseEvent()
 	after := time.Now()
+	buffer := time.Millisecond * 100

-	assert.True(t, e.OccurredAt().After(before) || e.OccurredAt().Equal(before))
-	assert.True(t, e.OccurredAt().Before(after) || e.OccurredAt().Equal(after))
+	assert.True(t, e.OccurredAt().After(before.Add(-buffer)) || e.OccurredAt().Equal(before))
+	assert.True(t, e.OccurredAt().Before(after.Add(buffer)) || e.OccurredAt().Equal(after))
 }
```

`asset/domain/entity/asset.go` (1)
`9-16`: **Consider implementing status transition validation.**

The `Status` type could enforce valid state transitions to prevent invalid status changes (e.g., from ERROR back to PENDING). Consider adding a transition-validation method; note that the `contains` helper used here would also need to be defined, e.g. via `slices.Contains`:

```diff
+func (s Status) CanTransitionTo(target Status) bool {
+	transitions := map[Status][]Status{
+		StatusPending:    {StatusActive, StatusExtracting, StatusError},
+		StatusExtracting: {StatusActive, StatusError},
+		StatusActive:     {StatusError},
+		StatusError:      {},
+	}
+	allowed, exists := transitions[s]
+	return exists && contains(allowed, target)
+}
```
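A self-contained version of this transition check, with the helper supplied by the standard library's `slices.Contains` (Go 1.21+), might look as follows; the status constants are the ones named in this review, not the actual entity code:

```go
package main

import (
	"fmt"
	"slices"
)

type Status string

const (
	StatusPending    Status = "PENDING"
	StatusExtracting Status = "EXTRACTING"
	StatusActive     Status = "ACTIVE"
	StatusError      Status = "ERROR"
)

// CanTransitionTo enforces the one-way lifecycle suggested in the review:
// PENDING -> {ACTIVE, EXTRACTING, ERROR}, EXTRACTING -> {ACTIVE, ERROR},
// ACTIVE -> {ERROR}, ERROR -> {} (terminal).
func (s Status) CanTransitionTo(target Status) bool {
	transitions := map[Status][]Status{
		StatusPending:    {StatusActive, StatusExtracting, StatusError},
		StatusExtracting: {StatusActive, StatusError},
		StatusActive:     {StatusError},
		StatusError:      {},
	}
	allowed, ok := transitions[s]
	return ok && slices.Contains(allowed, target)
}

func main() {
	fmt.Println(StatusPending.CanTransitionTo(StatusActive)) // true
	fmt.Println(StatusError.CanTransitionTo(StatusPending))  // false
}
```

`UpdateStatus` could then reject calls where `!a.status.CanTransitionTo(status)` instead of overwriting unconditionally.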
`asset/domain/entity/tests/asset_test.go` (1)

`43-64`: **Add edge-case tests for metadata updates.**

The `UpdateMetadata` test should include cases for invalid URLs and content types:

```go
func TestAsset_UpdateMetadata_InvalidURL(t *testing.T) {
	asset := entity.NewAsset(id.NewID(), "test.jpg", 1024, "image/jpeg")
	originalURL := asset.URL()

	asset.UpdateMetadata("", "invalid-url", "")
	assert.Equal(t, originalURL, asset.URL())
}
```

`asset/infrastructure/pubsub/pubsub_test.go` (3)
`88-120`: **Add concurrent event-handling test cases.**

The test should verify that event handling works correctly under concurrent conditions:

```go
func TestAssetPubSub_ConcurrentEventHandling(t *testing.T) {
	ps := NewAssetPubSub(&mockPublisher{}, "test-topic")
	var wg sync.WaitGroup
	numGoroutines := 10

	// Create multiple subscribers
	for i := 0; i < numGoroutines; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			ps.Subscribe(repository.EventTypeAssetCreated, func(ctx context.Context, event repository.AssetEvent) {
				// Handle event
			})
		}()
	}

	// Publish events concurrently
	for i := 0; i < numGoroutines; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			asset := entity.NewAsset(id.NewID(), "test.txt", 100, "text/plain")
			ps.PublishAssetCreated(context.Background(), asset)
		}()
	}

	wg.Wait()
}
```
`122-154`: **Add test cases for edge cases in the unsubscribe functionality.**

The test should verify behavior when unsubscribing from non-existent subscriptions and on repeated unsubscribe calls:

- Unsubscribing from a non-existent subscription
- Multiple unsubscribe calls with the same handler
- Unsubscribing while events are being processed

```go
func TestAssetPubSub_UnsubscribeEdgeCases(t *testing.T) {
	ps := NewAssetPubSub(&mockPublisher{}, "test-topic")
	handler := func(ctx context.Context, event repository.AssetEvent) {}

	// Test unsubscribing from a non-existent subscription
	ps.Unsubscribe(repository.EventTypeAssetCreated, handler)

	// Test multiple unsubscribe calls
	ps.Subscribe(repository.EventTypeAssetCreated, handler)
	ps.Unsubscribe(repository.EventTypeAssetCreated, handler)
	ps.Unsubscribe(repository.EventTypeAssetCreated, handler) // Should not panic
}
```
`156-223`: **Add test cases for invalid metadata scenarios.**

The test should verify behavior when publishing events with invalid or missing metadata:

- Missing required metadata fields
- Invalid metadata values
- Maximum metadata size limits

```go
func TestAssetPubSub_PublishEventsWithInvalidMetadata(t *testing.T) {
	ps := NewAssetPubSub(&mockPublisher{}, "test-topic")
	ctx := context.Background()

	// Test missing metadata
	asset := entity.NewAsset(id.NewID(), "", 100, "")
	err := ps.PublishAssetCreated(ctx, asset)
	assert.Error(t, err)

	// Test invalid content type
	asset = entity.NewAsset(id.NewID(), "test.txt", 100, "invalid/type/")
	err = ps.PublishAssetCreated(ctx, asset)
	assert.Error(t, err)
}
```

`asset/infrastructure/gcs/client_test.go` (1)
`575-607`: **Add test cases for pagination and filtering.**

The test should verify behavior when listing objects with pagination, filters, and sorting. Note that `ListWithPagination` and `ListWithFilter` are suggested APIs, not existing ones:

```go
func TestClient_ListWithPaginationAndFilters(t *testing.T) {
	client := newTestClient(t)

	// Create 100 test objects
	for i := 0; i < 100; i++ {
		id := id.NewID()
		objPath := client.objectPath(id)
		client.mockBucket.objects[objPath] = &mockObject{
			bucket: client.mockBucket,
			name:   objPath,
			attrs: &storage.ObjectAttrs{
				Name: objPath,
				Metadata: map[string]string{
					"name":         fmt.Sprintf("asset%d", i),
					"content_type": "application/json",
				},
			},
		}
	}

	// Test pagination
	pageSize := 10
	var allAssets []*entity.Asset
	var nextPageToken string
	for {
		assets, token, err := client.ListWithPagination(context.Background(), pageSize, nextPageToken)
		assert.NoError(t, err)
		allAssets = append(allAssets, assets...)
		if token == "" {
			break
		}
		nextPageToken = token
	}
	assert.Len(t, allAssets, 100)

	// Test filtering
	jsonAssets, err := client.ListWithFilter(context.Background(), "content_type", "application/json")
	assert.NoError(t, err)
	assert.Len(t, jsonAssets, 100)
}
```

`asset/domain/builder/tests/group_test.go` (3)
`14-80`: **Add test cases for edge cases.**

Consider adding the following test cases to improve coverage:

- Empty description handling
- Empty policy handling (already covered in `TestGroupBuilder_InvalidSetters`, but good to include here for completeness)
- Grouping test cases by success/failure scenarios for better readability

`82-101`: **Enhance panic tests with specific failure scenarios.**

Consider adding test cases for the different failure scenarios that should trigger a panic:

- Missing ID
- Missing name
- Invalid name

`125-134`: **Add a test for ID uniqueness.**

Consider adding a test case to verify that multiple calls to `NewID()` generate different IDs:

```go
b1 := builder.NewGroupBuilder().NewID().Name("test-group")
b2 := builder.NewGroupBuilder().NewID().Name("test-group")
g1, _ := b1.Build()
g2, _ := b2.Build()
assert.NotEqual(t, g1.ID(), g2.ID())
```

`asset/domain/builder/tests/asset_test.go` (1)
`14-114`: **Add test cases for content type and size validation.**

Consider adding the following test cases:

- Invalid content type (empty or malformed MIME type)
- Negative size value

`asset/graphql/schema.resolvers.go` (1)
`51-65`: **Add URL validation.**

Consider validating the returned URL to ensure it is properly formed and accessible:

- Check that the URL is well-formed
- Verify the URL scheme (https)
- Validate URL expiration, if applicable
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (32)
- `asset/domain/builder/asset.go` (1 hunks)
- `asset/domain/builder/group.go` (1 hunks)
- `asset/domain/builder/tests/asset_test.go` (1 hunks)
- `asset/domain/builder/tests/group_test.go` (1 hunks)
- `asset/domain/entity/asset.go` (1 hunks)
- `asset/domain/entity/group.go` (1 hunks)
- `asset/domain/entity/tests/asset_test.go` (1 hunks)
- `asset/domain/entity/tests/group_test.go` (1 hunks)
- `asset/domain/errors.go` (1 hunks)
- `asset/domain/event/event.go` (1 hunks)
- `asset/domain/event/publisher.go` (1 hunks)
- `asset/domain/event/tests/event_test.go` (1 hunks)
- `asset/domain/repository/repository.go` (1 hunks)
- `asset/domain/service/service.go` (1 hunks)
- `asset/domain/tests/errors_test.go` (1 hunks)
- `asset/domain/tests/types_test.go` (1 hunks)
- `asset/domain/types.go` (1 hunks)
- `asset/graphql/helper.go` (1 hunks)
- `asset/graphql/schema.resolvers.go` (1 hunks)
- `asset/infrastructure/decompress/zip.go` (1 hunks)
- `asset/infrastructure/decompress/zip_test.go` (1 hunks)
- `asset/infrastructure/gcs/client.go` (1 hunks)
- `asset/infrastructure/gcs/client_test.go` (1 hunks)
- `asset/infrastructure/pubsub/pubsub.go` (1 hunks)
- `asset/infrastructure/pubsub/pubsub_test.go` (1 hunks)
- `asset/repository/decompressor_repository.go` (1 hunks)
- `asset/repository/event.go` (1 hunks)
- `asset/repository/group_repository.go` (1 hunks)
- `asset/repository/persistence_repository.go` (1 hunks)
- `asset/repository/pubsub_repository.go` (1 hunks)
- `asset/usecase/interactor/interactor.go` (1 hunks)
- `asset/usecase/usecase.go` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
- asset/repository/persistence_repository.go
- asset/graphql/helper.go
- asset/repository/group_repository.go
- asset/repository/decompressor_repository.go
- asset/repository/pubsub_repository.go
- asset/infrastructure/gcs/client.go
- asset/usecase/usecase.go
- asset/infrastructure/decompress/zip_test.go
- asset/infrastructure/decompress/zip.go
🔇 Additional comments (23)
`asset/domain/types.go` (1)

`4-7`: **LGTM! Clean and well-documented domain type.**

The `ValidationResult` struct follows good domain modeling practices with clear separation of concerns.

`asset/domain/service/service.go` (1)

`1-11`: **LGTM! Well-structured package with clean imports.**

The package follows domain-driven design principles with clear separation of concerns.
`asset/domain/event/publisher.go` (4)

`1-4`: **LGTM! Clean package structure and imports.**

The package name and imports are appropriate for the domain event-handling functionality.

`10-13`: **LGTM! Well-designed handler interface.**

The Handler interface follows Go's best practices with a focused single responsibility and proper error handling.

`21-25`: **LGTM! Clean interface composition.**

The EventBus interface effectively combines the Publisher and Subscriber functionalities while maintaining interface segregation.

`5-8`: **Verify the Event type definition and consider input validation.**

The interface is well-designed, but there are a few considerations:

- The `Event` type is used but not visible in this file.
- Consider adding validation for the `events` parameter to handle empty or nil cases.

Let's verify the Event type definition:
✅ Verification successful

**Event interface is well-defined and follows good practices**

The `Event` interface is properly defined in `asset/domain/event/event.go` with the essential methods `EventType()` and `OccurredAt()`. Regarding validation, it is better to keep the interface minimal and handle input validation in the concrete implementations of the `Publisher` interface.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Search for Event type definition
ast-grep --pattern 'type Event interface {
  $$$
}'

# Search for Event type usage to understand the contract
rg -t go "Event.*interface" -A 5
```

Length of output: 832
`asset/domain/builder/asset.go` (4)

`1-9`: **LGTM! Clean package structure and imports.**

The package organization follows Go conventions, and imports are well-structured.

`19-40`: **LGTM! Comprehensive validation in Build method.**

The Build method properly validates all required fields and sets sensible defaults.

`130-133`: **LGTM! Well-documented UpdatedAt behavior.**

The comment clearly explains why UpdatedAt is a no-op method.

`50-53`: **Review the ID setter implementation.**

The ID setter creates a new Asset instance, which might unintentionally reset other fields. Consider implementing a simpler ID update mechanism in the entity.
`asset/domain/tests/errors_test.go` (3)

`1-8`: **LGTM! Well-structured test file setup.**

The package declaration and imports follow Go best practices. Good choice using testify/assert for cleaner assertions.

`10-15`: **LGTM! Well-structured table-driven test.**

The test structure follows Go best practices with a clear and concise test table definition.

`1-93`: **Verify complete error coverage.**

Let's ensure all domain errors are covered by the tests.
✅ Verification successful

**All domain errors are properly tested ✅**

All 14 error types declared in `asset/domain/errors.go` have corresponding test cases in the test file that verify their error messages. The coverage is complete.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

```shell
#!/bin/bash
# Find all error declarations in the domain package and compare with tested errors
echo "=== Declared Errors ==="
rg "^var Err[A-Z][A-Za-z]* = errors\." ../domain/

echo -e "\n=== Tested Errors ==="
rg "domain\.Err[A-Z][A-Za-z]*," errors_test.go
```

Length of output: 322

Script:

```shell
#!/bin/bash
# Find all error declarations in the domain package and compare with tested errors
echo "=== Declared Errors ==="
rg "^var Err[A-Z][A-Za-z]* = errors\." asset/domain/

echo -e "\n=== Tested Errors ==="
rg "domain\.Err[A-Z][A-Za-z]*," asset/domain/tests/errors_test.go
```

Length of output: 708

Script:

```shell
#!/bin/bash
echo "=== Searching for error declarations (multiple patterns) ==="
# Look for error declarations with various patterns
rg -i "err.*=|var.*err|const.*err" asset/domain/errors.go || true
rg -i "errors\.New|fmt\.Errorf" asset/domain/errors.go || true

echo -e "\n=== Checking if errors.go exists ==="
fd "errors.go" asset/domain/
```

Length of output: 1978
asset/domain/event/tests/event_test.go (2)
1-11
: LGTM! Well-structured package and imports.The package follows Go conventions, and the imports are appropriate for the testing requirements.
48-72
: Consider testing group relationships and constraints.While the basic group operations are well-tested, consider adding:
- Tests for group name validation rules
- Parent-child relationship tests if supported
- Tests for maximum group depth/size if applicable
- Event ordering verification in complex scenarios
Let's check whether there are group constraints we should test.
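The first bullet above, group name validation, could be exercised with a table-driven test. The sketch below is self-contained: `validateGroupName` and its rule set (non-empty, at most 255 characters) are assumptions standing in for the entity package's real constraints, not the project's actual code.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
	"unicode/utf8"
)

// validateGroupName is a hypothetical stand-in for the entity package's
// name rules: non-empty and at most 255 characters. Adjust to whatever
// constraints the real Group entity enforces.
func validateGroupName(name string) error {
	if name == "" {
		return errors.New("group name is required")
	}
	if utf8.RuneCountInString(name) > 255 {
		return errors.New("group name exceeds 255 characters")
	}
	return nil
}

func main() {
	// Table-driven cases mirroring the suggested test structure.
	cases := []struct {
		label   string
		input   string
		wantErr bool
	}{
		{"valid name", "design-assets", false},
		{"empty name", "", true},
		{"too long", strings.Repeat("a", 256), true},
	}
	for _, c := range cases {
		err := validateGroupName(c.input)
		fmt.Printf("%s: gotErr=%v wantErr=%v\n", c.label, err != nil, c.wantErr)
	}
}
```

In a real test file, each case would run inside `t.Run(c.label, ...)` with testify assertions, as in the other tests in this PR.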
asset/usecase/interactor/interactor.go (1)
34-36: Handle Pub/Sub publish failures consistently.

In methods like `CreateAsset`, `UpdateAsset`, `DeleteAsset`, `UploadAssetContent`, and `DecompressZipContent`, errors from publishing events are logged but not returned to the caller. This could lead to silent failures in the event system and inconsistent states.

Consider implementing a consistent strategy for handling publish failures, such as:
- Returning the error to the caller for appropriate handling.
- Implementing a retry mechanism with exponential backoff.
- Using a circuit breaker pattern to manage transient failures.
Also applies to: 52-54, 65-67, 84-86, 124-126
asset/domain/errors.go (1)
5-27: Error Definitions Are Clear and Well-Organized. The error variables are appropriately defined and categorized, providing clear and descriptive messages. This enhances maintainability and consistency in error handling across the domain.
asset/domain/builder/group.go (1)
73-85: 🛠️ Refactor suggestion: Optimize the CreatedAt implementation.

The CreatedAt method recreates the entire group instance to set the creation time. This is inefficient and could be simplified.

```diff
 func (b *GroupBuilder) CreatedAt(createdAt time.Time) *GroupBuilder {
-	// We need to create a new group to set createdAt
-	b.g = entity.NewGroup(b.g.ID(), b.g.Name())
-	// Restore other fields
-	if b.g.Policy() != "" {
-		b.Policy(b.g.Policy())
-	}
-	if b.g.Description() != "" {
-		b.Description(b.g.Description())
-	}
+	b.g.createdAt = createdAt
 	return b
 }
```

Likely invalid or redundant comment.
asset/domain/builder/tests/group_test.go (2)
103-123: LGTM! Comprehensive test coverage for all setter methods with proper assertions.
136-155: LGTM! Good coverage of edge cases for invalid inputs.
asset/domain/builder/tests/asset_test.go (3)
116-137: Enhance panic tests with specific failure scenarios. Consider adding test cases for different failure scenarios that should trigger a panic:
- Missing ID
- Missing workspace ID
- Invalid size
- Invalid URL
139-174: LGTM! Comprehensive test coverage for all setter methods with proper assertions.
176-190: Add a test for ID uniqueness. Consider adding a test case to verify that multiple calls to NewID() generate different IDs:

```go
b1 := builder.NewAssetBuilder().NewID().
	Name("test.jpg").
	Size(1024).
	ContentType("image/jpeg").
	WorkspaceID(id.NewWorkspaceID()).
	URL("https://example.com/test1.jpg")
b2 := builder.NewAssetBuilder().NewID().
	Name("test.jpg").
	Size(1024).
	ContentType("image/jpeg").
	WorkspaceID(id.NewWorkspaceID()).
	URL("https://example.com/test2.jpg")

a1, _ := b1.Build()
a2, _ := b2.Build()
assert.NotEqual(t, a1.ID(), a2.ID())
```
Actionable comments posted: 7
♻️ Duplicate comments (1)
asset/domain/entity/group.go (1)
21-29: ⚠️ Potential issue: Validate `name` and initialize required fields in the `NewGroup` constructor.

The `NewGroup` constructor does not validate whether the `name` parameter is empty. Since the `UpdateName` method returns an error when given an empty name, the same validation should be enforced during group creation to maintain data integrity.

Additionally, the `policy` field is required by the `Validate` method but is not initialized in the constructor. This inconsistency can lead to validation failures when creating a new `Group`.

Apply this diff to add validation and initialization to the constructor:

```diff
-func NewGroup(id id.GroupID, name string) *Group {
+func NewGroup(id id.GroupID, name string, policy string) (*Group, error) {
+	if name == "" {
+		return nil, domain.ErrEmptyGroupName
+	}
+	if policy == "" {
+		return nil, domain.ErrEmptyPolicy
+	}
 	now := time.Now()
 	return &Group{
 		id:        id,
 		name:      name,
+		policy:    policy,
 		createdAt: now,
 		updatedAt: now,
-	}
+	}, nil
 }
```

Update the function signature to return both `*Group` and `error`, and handle the error accordingly when creating a new group.
🧹 Nitpick comments (6)
asset/domain/entity/asset.go (2)
11-18: Consider enhancing type safety for Status.

While the current implementation is functional, you could improve type safety by:

- Making the Status type unexported to prevent external packages from creating invalid status values
- Adding a method to validate status values

```diff
-type Status string
+type status string
+
+type Status = status
+
+func (s Status) IsValid() bool {
+	switch s {
+	case StatusPending, StatusActive, StatusExtracting, StatusError:
+		return true
+	default:
+		return false
+	}
+}
```
20-33: Consider renaming the 'error' field and adding serialization tags.

- The field name 'error' might be confusing as it shadows the built-in error type.
- Consider adding serialization tags if this struct needs to be marshaled/unmarshaled.

```diff
 type Asset struct {
 	id          id.ID
 	groupID     id.GroupID
 	projectID   id.ProjectID
 	workspaceID id.WorkspaceID
 	name        string
 	size        int64
 	url         string
 	contentType string
 	status      Status
-	error       string
+	errorMsg    string
 	createdAt   time.Time
 	updatedAt   time.Time
 }
```

asset/domain/event/store.go (1)
8-14: Consider adding field validation and serialization tags.

While the struct fields are well-chosen for event sourcing, consider these improvements:

- Add JSON/BSON tags for consistent serialization
- Add validation for UserID and AggregateID (e.g., non-empty constraints)

```diff
 type EventMetadata struct {
-	Version     int
-	Timestamp   time.Time
-	UserID      string
-	AggregateID string
+	Version     int       `json:"version" bson:"version"`
+	Timestamp   time.Time `json:"timestamp" bson:"timestamp"`
+	UserID      string    `json:"user_id" bson:"user_id"`
+	AggregateID string    `json:"aggregate_id" bson:"aggregate_id"`
 }
```

asset/domain/entity/group.go (1)
79-83: Enforce the `description` length constraint in the `UpdateDescription` method.

The `UpdateDescription` method does not validate the maximum length of the `description`. Since the `Validate` method enforces a maximum length of 500 characters for `description`, updates should comply with the same constraint.

Apply this diff to add length validation:

```diff
 func (g *Group) UpdateDescription(description string) error {
+	if len(description) > 500 {
+		return domain.ErrDescriptionTooLong
+	}
 	g.description = description
 	g.updatedAt = time.Now()
 	return nil
 }
```

Ensure that `ErrDescriptionTooLong` is defined in the `domain` package.

asset/domain/validation/validator.go (2)
128-141: Extend `RequiredRule` to handle empty collections.

Currently, the `RequiredRule` only checks for nil and empty strings. To ensure that empty collections (slices, arrays, maps) are also considered invalid, consider extending the validation logic in the `Validate` method.

Modify the switch statement to handle additional types:

```diff
 func (r *RequiredRule) Validate(ctx context.Context, value interface{}) error {
 	if value == nil {
 		return NewValidationError(r.Field, "field is required")
 	}
 	switch v := value.(type) {
 	case string:
 		if v == "" {
 			return NewValidationError(r.Field, "field is required")
 		}
+	case []interface{}:
+		if len(v) == 0 {
+			return NewValidationError(r.Field, "field is required")
+		}
+	case map[string]interface{}:
+		if len(v) == 0 {
+			return NewValidationError(r.Field, "field is required")
+		}
+	case []string:
+		if len(v) == 0 {
+			return NewValidationError(r.Field, "field is required")
+		}
+	// Add additional cases as needed
 	}
 	return nil
 }
```

This ensures that empty collections are appropriately flagged by the `RequiredRule`.
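Enumerating concrete collection types case by case can miss some (e.g. `[]int`). An alternative, sketched below with an illustrative `isEmpty` helper rather than the package's actual rule, is to use reflection so any slice, map, array, or string with zero length fails the required check.

```go
package main

import (
	"fmt"
	"reflect"
)

// isEmpty reports whether a value should fail a "required" check.
// Reflection covers every slice/map/array/string type uniformly instead of
// listing each concrete type in a switch.
func isEmpty(value interface{}) bool {
	if value == nil {
		return true
	}
	v := reflect.ValueOf(value)
	switch v.Kind() {
	case reflect.String, reflect.Slice, reflect.Map, reflect.Array, reflect.Chan:
		return v.Len() == 0
	case reflect.Ptr, reflect.Interface:
		return v.IsNil()
	default:
		// Numbers, booleans, structs, etc. are never "empty" here.
		return false
	}
}

func main() {
	fmt.Println(isEmpty(""), isEmpty([]string{}), isEmpty(map[string]int{}), isEmpty("x"), isEmpty(42))
	// true true true false false
}
```

The trade-off is a small reflection cost per check versus the type switch's zero-allocation fast path; for a validation layer this is usually negligible.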
113-114: Correct the comment for the `NewValidationError` function.

The comment above the `NewValidationError` function incorrectly states "ValidationError creates a new validation error". It should match the function name to maintain consistency and clarity.

Apply this correction:

```diff
-// ValidationError creates a new validation error
+// NewValidationError creates a new validation error
 func NewValidationError(field, message string) *ValidationError {
 	return &ValidationError{
 		Field:   field,
 		Message: message,
 	}
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
asset/domain/entity/asset.go (1 hunks)
asset/domain/entity/group.go (1 hunks)
asset/domain/event/store.go (1 hunks)
asset/domain/validation/validator.go (1 hunks)
asset/domain/validation/validator_test.go (1 hunks)
🔇 Additional comments (8)
asset/domain/entity/asset.go (6)
1-9: LGTM! Clean package structure and imports. The package name and imports are well-organized and follow Go conventions.
69-82: LGTM! Well-implemented getters. The getters are properly implemented and follow Go conventions.
89-100: Add validation for metadata updates. The UpdateMetadata method should validate the URL format and content type. Currently, it accepts any non-empty string without validation.
117-120: Add size validation. The SetSize method should validate that the size is non-negative.
35-46: 🛠️ Refactor suggestion: Add size validation in the constructor.

The constructor should validate that the size is non-negative.

```diff
 func NewAsset(id id.ID, name string, size int64, contentType string) *Asset {
+	if size < 0 {
+		size = 0
+	}
 	now := time.Now()
 	return &Asset{
 		id:          id,
 		name:        name,
 		size:        size,
 		contentType: contentType,
 		status:      StatusPending,
 		createdAt:   now,
 		updatedAt:   now,
 	}
 }
```

Likely invalid or redundant comment.
48-67: 🛠️ Refactor suggestion: Enhance validation rules.

The Validate method should include:

- URL format validation
- Size validation (non-negative)
- Content-type format validation

```diff
 func (a *Asset) Validate(ctx context.Context) validation.ValidationResult {
 	validationCtx := validation.NewValidationContext(
 		&validation.RequiredRule{Field: "id"},
 		&validation.RequiredRule{Field: "name"},
 		&validation.MaxLengthRule{Field: "name", MaxLength: 255},
 		&validation.RequiredRule{Field: "url"},
+		&validation.URLFormatRule{Field: "url"},
 		&validation.RequiredRule{Field: "contentType"},
+		&validation.ContentTypeFormatRule{Field: "contentType"},
+		&validation.MinValueRule{Field: "size", MinValue: 0},
 	)
 	fields := map[string]interface{}{
 		"id":          a.id,
 		"name":        a.name,
 		"url":         a.url,
 		"contentType": a.contentType,
+		"size":        a.size,
 	}
 	return validationCtx.Validate(ctx, fields)
 }
```

Likely invalid or redundant comment.
asset/domain/event/store.go (1)
1-7: LGTM! Clean package structure with minimal dependencies. The package follows Go conventions and imports only necessary standard library packages.
asset/domain/validation/validator_test.go (1)
10-105: Unit tests are comprehensive and cover key validation scenarios. The unit tests thoroughly exercise the validation logic, including both positive and negative cases for each rule. This ensures robustness of the validation framework and confidence that future changes will not inadvertently introduce bugs.
Actionable comments posted: 6
🧹 Nitpick comments (9)
asset/usecase/transaction.go (1)
24-26: Consider adding a type assertion helper.

`GetTransactionFromContext` returns `interface{}`, which requires a type assertion by callers. Consider adding a type-safe helper method if there's a specific expected type.

Example:

```diff
 func GetTransactionFromContext(ctx context.Context) interface{} {
 	return ctx.Value(transactionKey{})
 }
+
+// GetTypedTransactionFromContext returns the transaction with type assertion
+func GetTypedTransactionFromContext[T any](ctx context.Context) (T, bool) {
+	v := ctx.Value(transactionKey{})
+	if v == nil {
+		var zero T
+		return zero, false
+	}
+	t, ok := v.(T)
+	return t, ok
+}
```

asset/usecase/result.go (1)
17-23: Consider using generics for a type-safe Result.

The `Data interface{}` field requires a type assertion by callers. Consider using generics to make it type-safe.

Example:

```diff
-type Result struct {
+type Result[T any] struct {
 	Code    ResultCode
-	Data    interface{}
+	Data    T
 	Errors  []*Error
 	Message string
 }
```

asset/infrastructure/memory/decompress_job.go (1)
11-15: Consider adding capacity to the jobs map.

If you expect a large number of concurrent jobs, consider initializing the map with an expected capacity to avoid reallocations.

Example:

```diff
 func NewDecompressJobRepository() *DecompressJobRepository {
 	return &DecompressJobRepository{
-		jobs: make(map[string]*assetusecase.DecompressStatus),
+		jobs: make(map[string]*assetusecase.DecompressStatus, 100), // adjust capacity based on expected concurrent jobs
 	}
 }
```

asset/infrastructure/memory/decompress_job_test.go (3)
71-74: Add an assertion for job status filtering.

The List test verifies the count of returned jobs but doesn't assert which specific jobs are returned. Add assertions to verify that only jobs with "processing" and "pending" status are included in the results.

```diff
 jobs, err := repo.List(ctx)
 assert.NoError(t, err)
 assert.Len(t, jobs, 2)
+// Verify only active jobs are returned
+for _, job := range jobs {
+	assert.Contains(t, []string{"processing", "pending"}, job.Status)
+}
```
115-117: Add error handling for the Get call.

The test ignores a potential error from repo.Get when verifying the progress update.

```diff
-got, _ := repo.Get(ctx, "job1")
+got, err := repo.Get(ctx, "job1")
+assert.NoError(t, err)
 assert.Equal(t, 50.0, got.Progress)
```
143-144: Add error handling for Get calls in the Complete and Fail tests.

Similar to the previous comment, error handling is missing for the repo.Get calls.

```diff
-got, _ := repo.Get(ctx, "job1")
+got, err := repo.Get(ctx, "job1")
+assert.NoError(t, err)
 assert.Equal(t, "completed", got.Status)

-got, _ = repo.Get(ctx, "job2")
+got, err = repo.Get(ctx, "job2")
+assert.NoError(t, err)
 assert.Equal(t, "failed", got.Status)
```

Also applies to: 150-151
asset/usecase/usecase.go (2)
25-26: Add validation for the Status field in DecompressStatus.

The Status field accepts any string value. Consider using an enumeration or constants for the valid status values ("pending", "processing", "completed", "failed").

```diff
+// DecompressStatusType represents the status of a decompression job
+type DecompressStatusType string
+
+const (
+	StatusPending    DecompressStatusType = "pending"
+	StatusProcessing DecompressStatusType = "processing"
+	StatusCompleted  DecompressStatusType = "completed"
+	StatusFailed     DecompressStatusType = "failed"
+)

 type DecompressStatus struct {
 	JobID       string
 	AssetID     id.ID
-	Status      string // "pending", "processing", "completed", "failed"
+	Status      DecompressStatusType
 	Progress    float64 // 0-100
 	Error       string
 	StartedAt   time.Time
 	CompletedAt time.Time
 }
```
50-52: Add documentation for the DecompressZipContent return type.

The method returns a channel, but its type and usage are not documented.

```diff
-// DecompressZipContent decompresses zip content and returns a channel of decompressed files
+// DecompressZipContent decompresses zip content and returns a channel of decompressed files.
+// The returned channel provides a stream of *DecompressedFile objects.
+// The channel will be closed once all files have been processed or if an error occurs.
 DecompressZipContent(ctx context.Context, content []byte) *Result
```

asset/usecase/interactor/interactor.go (1)
322-324: TODO comment needs implementation. The content transformation is marked as TODO, but no implementation is provided.
Would you like me to help implement the content transformation functionality or create a GitHub issue to track this task?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
asset/graphql/schema.resolvers.go (1 hunks)
asset/infrastructure/memory/decompress_job.go (1 hunks)
asset/infrastructure/memory/decompress_job_test.go (1 hunks)
asset/repository/decompress_job.go (1 hunks)
asset/usecase/interactor/interactor.go (1 hunks)
asset/usecase/result.go (1 hunks)
asset/usecase/transaction.go (1 hunks)
asset/usecase/usecase.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- asset/graphql/schema.resolvers.go
🔇 Additional comments (2)
asset/usecase/interactor/interactor.go (2)
199-199: Use a typed context key for assetID. Using string keys in context can lead to collisions.
This is a duplicate of a previous review comment. Please implement typed context keys as suggested in the earlier review.
290-295: Continue deleting assets even if some deletions fail. The current implementation stops on the first failure.
This is a duplicate of a previous review comment. Please implement the suggested changes to continue deletion and aggregate errors.
Actionable comments posted: 1
♻️ Duplicate comments (2)
asset/domain/validation/validator.go (1)
70-91: 🛠️ Refactor suggestion: Refactor the `ValidationContext.Validate` method for better extensibility.

The current implementation of `ValidationContext.Validate` uses type assertions to handle specific rule types like `*RequiredRule` and `*MaxLengthRule`. This approach limits extensibility because adding new rule types would require modifying this method each time.

Consider refactoring the `ValidationRule` interface and the `ValidationContext.Validate` method to handle all rules uniformly by including the field name within the `ValidationRule` interface and letting each rule manage its own validation logic. This will simplify the validation process and make it easier to add new rules.

asset/usecase/interactor/interactor.go (1)
214-270: ⚠️ Potential issue: Fix a potential goroutine leak and improve error handling in the background job.

The background goroutine has several issues:

- No context cancellation
- The channel is read twice, which will cause a deadlock
- Error handling in job status updates could be improved
🧹 Nitpick comments (16)
asset/infrastructure/memory/decompress_job_test.go (6)
24-24: Consider using a fixed time value for tests.

To improve test reliability and avoid potential flakiness, consider using a fixed time value instead of `time.Now()`. This makes the tests more deterministic and easier to debug.

```diff
+var testTime = time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)

 // Replace all occurrences of time.Now() with testTime
-	StartedAt: time.Now(),
+	StartedAt: testTime,
```

Also applies to: 52-52, 58-58, 64-64, 87-87, 112-112, 136-136, 141-141
18-44: Enhance Save and Get test coverage.

While the basic cases are covered, consider adding these test scenarios:

- Save with empty JobID
- Save with invalid status values
- Verify all fields are preserved after Get

```diff
 t.Run("Save and Get", func(t *testing.T) {
+	t.Run("empty job id", func(t *testing.T) {
+		status := &assetusecase.DecompressStatus{
+			Status:    "pending",
+			StartedAt: testTime,
+		}
+		err := repo.Save(ctx, status)
+		assert.Equal(t, rerror.ErrInvalidParams, err)
+	})
+
+	t.Run("invalid status", func(t *testing.T) {
+		status := &assetusecase.DecompressStatus{
+			JobID:     "job1",
+			Status:    "invalid",
+			StartedAt: testTime,
+		}
+		err := repo.Save(ctx, status)
+		assert.Equal(t, rerror.ErrInvalidParams, err)
+	})
```
76-80: Strengthen List test assertions.

The test only verifies the length of the returned jobs. Consider adding assertions to verify:

- The exact jobs that are returned
- Their order (if specified in the requirements)
- That completed/failed jobs are indeed excluded

```diff
-assert.Len(t, jobs, 2)
+assert.Equal(t, 2, len(jobs))
+// Verify exact jobs returned and their order
+expectedJobs := []*assetusecase.DecompressStatus{status1, status3}
+assert.Equal(t, expectedJobs, jobs)
+// Verify completed job is not included
+for _, job := range jobs {
+	assert.NotEqual(t, "completed", job.Status)
+	assert.NotEqual(t, "failed", job.Status)
+}
```
106-130: Add boundary tests for progress updates.

Consider adding test cases for:

- Negative progress values
- Progress values > 100
- Progress updates on completed/failed jobs

```diff
 err = repo.UpdateProgress(ctx, "non-existent", 50.0)
 assert.Equal(t, rerror.ErrNotFound, err)
+
+// Test invalid progress values
+err = repo.UpdateProgress(ctx, "job1", -1.0)
+assert.Equal(t, rerror.ErrInvalidParams, err)
+
+err = repo.UpdateProgress(ctx, "job1", 101.0)
+assert.Equal(t, rerror.ErrInvalidParams, err)
+
+// Test update on completed job
+repo.Complete(ctx, "job1")
+err = repo.UpdateProgress(ctx, "job1", 75.0)
+assert.Equal(t, rerror.ErrInvalidParams, err)
```
131-171: Enhance Complete and Fail test validations.

Consider adding validations for:

- CompletedAt/FailedAt timestamps
- Preventing multiple completions/failures of the same job
- Completion of failed jobs and vice versa

```diff
 got, _ := repo.Get(ctx, "job1")
 assert.Equal(t, "completed", got.Status)
 assert.Equal(t, 100.0, got.Progress)
+assert.NotNil(t, got.CompletedAt)
+
+// Test multiple completions
+err = repo.Complete(ctx, "job1")
+assert.Equal(t, rerror.ErrInvalidParams, err)
+
+// Test completing a failed job
+err = repo.Complete(ctx, "job2")
+assert.Equal(t, rerror.ErrInvalidParams, err)
+
+// Test failing a completed job
+err = repo.Fail(ctx, "job1", "error")
+assert.Equal(t, rerror.ErrInvalidParams, err)
```
14-17: Consider structural improvements for better test organization.

- Add test documentation describing the test scenarios
- Enable parallel test execution where possible
- Consider using a test helper for common setup code

```diff
+// TestDecompressJobRepository verifies the functionality of the in-memory
+// implementation of the DecompressJobRepository interface. It covers basic
+// CRUD operations, job status management, and error handling.
 func TestDecompressJobRepository(t *testing.T) {
+	t.Parallel()
 	ctx := context.Background()
-	repo := NewDecompressJobRepository()
+
+	// Helper function for common setup
+	newTestRepo := func(t *testing.T) *DecompressJobRepository {
+		t.Helper()
+		return NewDecompressJobRepository()
+	}
```

asset/domain/builder/tests/group_test.go (7)
34-45: Refactor the success case to avoid potential panics.

The success case uses `panic` within the test setup, which could mask actual issues. Consider refactoring to handle errors explicitly:

```diff
 want: func() *entity.Group {
-	group := entity.NewGroup(groupID, "test-group")
-	err := group.UpdatePolicy("test-policy")
-	if err != nil {
-		panic(err)
-	}
-	err = group.UpdateDescription("test description")
-	if err != nil {
-		panic(err)
-	}
-	return group
+	group := entity.NewGroup(groupID, "test-group")
+	if err := group.UpdatePolicy("test-policy"); err != nil {
+		t.Fatalf("failed to update policy: %v", err)
+	}
+	if err := group.UpdateDescription("test description"); err != nil {
+		t.Fatalf("failed to update description: %v", err)
+	}
+	return group
 }(),
```
18-66: Add test cases for additional edge cases. Consider adding test cases for:
- Empty description
- Very long name/description (if there are length limits)
- Special characters in name/policy
- Unicode characters in strings
92-106: Enhance MustBuild test coverage.

Consider these improvements:

- Verify all properties of the successfully built group.
- Use `assert.PanicsWithError` to verify the specific error message in the panic case.

```diff
 assert.NotPanics(t, func() {
 	group := builder.NewGroupBuilder().
 		CreatedAt(time.Now()).
 		ID(groupID).
 		Name("test-group").
 		Policy("test-policy").
 		Description("test description").
 		MustBuild()
 	assert.NotNil(t, group)
+	assert.Equal(t, groupID, group.ID())
+	assert.Equal(t, "test-group", group.Name())
+	assert.Equal(t, "test-policy", group.Policy())
+	assert.Equal(t, "test description", group.Description())
 })

-assert.Panics(t, func() {
+assert.PanicsWithError(t, id.ErrInvalidID.Error(), func() {
 	builder.NewGroupBuilder().MustBuild()
 })
```
128-128: Improve time comparison precision.

Using `Unix()` for time comparison loses sub-second precision. Consider using a more precise comparison:

```diff
-assert.Equal(t, now.Unix(), group.CreatedAt().Unix())
+assert.True(t, now.Equal(group.CreatedAt()), "times should be equal with full precision")
```
133-134: Enhance NewID test documentation and validation.

- Make the comment more specific about which fields are required.
- Consider adding validation for the ID format if there's a specific structure.

```diff
-// Add required fields to make the build succeed
-b = b.Name("test-group")
+// Name is required for group creation
+b = b.Name("test-group")
+
+// Verify ID format if applicable
+assert.Regexp(t, `^[a-zA-Z0-9-]+$`, group.ID().String())
```
142-161: Expand invalid setter test cases.
- Add test cases for other potential invalid inputs:
- Invalid policy format (if there are format requirements)
- Invalid description format
- Maximum length violations (if applicable)
- Improve the empty policy comment to explain why it's allowed.
```diff
 // Test setting empty policy
+// Empty policy is allowed as it represents no access restrictions
 b = builder.NewGroupBuilder().
 	ID(groupID).
 	Name("test-group").
 	Policy("")
```
1-12: Add test coverage documentation and standardize error handling.

Consider these improvements:

- Add package documentation explaining test coverage goals and methodology.
- Consider using a consistent approach for error handling across all tests (e.g., `assert.ErrorIs` vs `assert.Equal` for errors).

```diff
 package builder_test

+// Package builder_test provides comprehensive testing for the GroupBuilder functionality.
+// Test coverage includes:
+// - Basic group creation with all fields
+// - Error handling for missing required fields
+// - Validation of all setter methods
+// - ID generation functionality
+// - Invalid input handling
+
 import (
 	"testing"
 	"time"
```

asset/usecase/interactor/interactor.go (3)
18-24: Add documentation for struct fields.

Each field in the `AssetInteractor` struct should be documented to explain its purpose and requirements.

```diff
 type AssetInteractor struct {
+	// repo handles persistent storage operations for assets
 	repo         repository.PersistenceRepository
+	// decompressor handles zip file compression/decompression
 	decompressor repository.Decompressor
+	// pubsub handles event publishing for asset operations
 	pubsub       repository.PubSubRepository
+	// txManager manages database transactions
 	txManager    assetusecase.TransactionManager
+	// jobRepo manages decompression job status
 	jobRepo      repository.DecompressJobRepository
 }
```
26-39: Inject the decompressor instead of hardcoding it.

The decompressor is currently hardcoded in the constructor. Consider injecting it for better testability and flexibility.

```diff
 func NewAssetInteractor(
 	repo repository.PersistenceRepository,
 	pubsub repository.PubSubRepository,
 	txManager assetusecase.TransactionManager,
 	jobRepo repository.DecompressJobRepository,
+	decompressor repository.Decompressor,
 ) *AssetInteractor {
 	return &AssetInteractor{
 		repo:         repo,
-		decompressor: decompress.NewZipDecompressor(),
+		decompressor: decompressor,
 		pubsub:       pubsub,
 		txManager:    txManager,
 		jobRepo:      jobRepo,
 	}
 }
```
331-332: Provide a timeline for implementing the transformation logic. The TODO comment should either be implemented or have a clear timeline/ticket for implementation.
Would you like me to help implement the transformation logic or create a GitHub issue to track this task?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
asset/domain/builder/asset.go (1 hunks)
asset/domain/builder/group.go (1 hunks)
asset/domain/builder/tests/asset_test.go (1 hunks)
asset/domain/builder/tests/group_test.go (1 hunks)
asset/domain/entity/tests/group_test.go (1 hunks)
asset/domain/validation/validator.go (1 hunks)
asset/infrastructure/memory/decompress_job_test.go (1 hunks)
asset/usecase/interactor/interactor.go (1 hunks)
asset/usecase/result.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- asset/domain/builder/tests/asset_test.go
- asset/usecase/result.go
- asset/domain/entity/tests/group_test.go
- asset/domain/builder/asset.go
- asset/domain/builder/group.go
🔇 Additional comments (3)
asset/usecase/interactor/interactor.go (3)
61-64: Handle pub/sub publish failures consistently. Currently, publish failures are logged but still trigger a transaction rollback. Consider implementing a more resilient approach:
- Use a message queue for reliable event delivery
- Implement retry logic with exponential backoff
- Move event publishing outside the transaction
Also applies to: 98-101, 121-124, 149-152, 232-235
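One way to realize "move event publishing outside the transaction" is to buffer events during the transaction and flush them only after commit succeeds, so a publish failure can never roll back committed data and a rollback never leaks events. The `Tx` type below is illustrative, not the project's actual transaction manager.

```go
package main

import "fmt"

// Tx queues events during a transaction and publishes them only after
// Commit, decoupling event delivery from the database transaction.
type Tx struct {
	pending []string
	publish func(string) error
}

// Enqueue records an event to be published after a successful commit.
func (t *Tx) Enqueue(event string) { t.pending = append(t.pending, event) }

// Commit commits database work first, then publishes the queued events.
func (t *Tx) Commit() error {
	// ... commit database work here ...
	for _, e := range t.pending {
		if err := t.publish(e); err != nil {
			// Data is already committed; log and retry asynchronously
			// rather than failing the caller.
			fmt.Println("publish failed, will retry:", e)
		}
	}
	t.pending = nil
	return nil
}

func main() {
	var published []string
	tx := &Tx{publish: func(e string) error {
		published = append(published, e)
		return nil
	}}
	tx.Enqueue("asset.created")
	tx.Enqueue("asset.uploaded")
	if err := tx.Commit(); err == nil {
		fmt.Println(published) // [asset.created asset.uploaded]
	}
}
```

A durable variant of this idea is the transactional-outbox pattern: write the event to an outbox table in the same transaction, and let a background worker publish and delete rows, which also gives at-least-once delivery across process restarts.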
290-313: Continue deleting all assets even if some deletions fail. The current implementation stops on the first error. Consider collecting errors and continuing the deletion process.
199-199: Use typed context keys to prevent collisions. Using string keys in context can lead to collisions. Define a custom type for context keys.
- Deleted the pubsub.go and pubsub_test.go files, removing the asset event publishing functionality.
- This change simplifies the codebase and prepares for potential redesigns or enhancements in the asset management system.

- Added pubsub repository to the Resolver struct to enable event publishing for asset operations.
- Implemented event publishing for asset creation, upload, update, and deletion within the GraphQL resolvers.
- Enhanced error logging for event publishing failures, improving observability and debugging capabilities.
- This change reintroduces asset event handling, facilitating better event-driven architecture in the asset management system.

- Restored event publishing for asset operations (creation, upload, update, deletion) within the service layer.
- Enhanced the Service struct to include pubsub repository, enabling event notifications for asset changes.
- Improved error logging for event publishing failures, enhancing observability.
- This change reinstates a robust event-driven architecture for better asset management.

… handling
- Added subscription and unsubscription methods to the AssetPubSub struct, allowing for flexible event handling.
- Implemented tests for subscribing to all events, specific events, and verifying unsubscribe functionality.
- Enhanced event notification mechanism to ensure subscribers receive relevant asset events (created, updated, uploaded).
- Improved concurrency handling in event processing with mutex locks to prevent race conditions.
- This change strengthens the event-driven architecture, facilitating better asset management and responsiveness.

… update tests for asset publishing
- Updated error handling in GetUserByCredentials to correctly check for not found errors.
- Enhanced test cases in pubsub_test.go to assert no errors during asset event publishing, improving test reliability.
- Added comments to clarify intent in test cases, ensuring better maintainability and understanding of the codebase.

…ations
- Deleted the asset usecase and asset interactor files as they are no longer needed.
- Updated comments in the asset domain file for clarity on ID getters.
- This cleanup improves code maintainability by removing obsolete code.

…in code
- Updated asset and group domain files to use the new id package for ID types.
- Removed redundant ID type definitions and mock functions, streamlining the codebase.
- Refactored resolver methods to align with the new ID structure, enhancing clarity and maintainability.
- Improved group test cases by simplifying error handling in name and policy updates.
- This refactor improves code organization and prepares the codebase for future enhancements.

… to entity package
- Added Validate method to Asset and Group structs to enforce required fields and constraints.
- Integrated validation rules for fields such as id, name, url, contentType, policy, and description.
- Enhanced data integrity checks within the domain layer, improving overall robustness of the asset management system.

- Refactored asset usecase methods to return structured Result types instead of errors, improving error management and consistency across the codebase.
- Updated resolver methods to handle Result types, ensuring proper error propagation and response handling.
- Introduced new DeliverOptions and DecompressStatus types for better asset delivery and decompression job tracking.
- Enhanced the CreateAsset, UpdateAsset, and DeleteAsset methods with transaction management for improved reliability.
- This update improves the robustness of asset operations and prepares the codebase for future enhancements.

- Updated GroupBuilder's Description method to handle errors from UpdateDescription gracefully, allowing for better error management during the build process.
- Modified tests in asset_test.go and group_test.go to assert no errors during asset and group building, enhancing test reliability.
- Enhanced error handling in the DecompressJobRepository tests to ensure proper error assertions during job status updates and saves.
- Improved the AssetInteractor's DecompressZipContent method to log errors when saving job statuses and updating progress, increasing robustness in job processing.

These changes enhance the error handling capabilities across the asset management system, ensuring that errors are properly managed and logged.

…dation error handling
- Replaced the manual restoration of fields in AssetBuilder and GroupBuilder's CreatedAt methods with a direct call to SetCreatedAt, simplifying the code and improving maintainability.
- Introduced SetCreatedAt methods in the Asset and Group entities to encapsulate the setting of creation time, enhancing code clarity and reducing redundancy.

…ling
- Deleted the entire event package, including event definitions, interfaces, and tests, to streamline the codebase and remove unused components.
- Updated GroupBuilder's Name and Policy methods to remove comments about error handling, simplifying the code and improving clarity.
- This refactor enhances maintainability by eliminating unnecessary complexity and focusing on the core functionality of the builders.

- Updated the for loop in the DecompressZipContent method to remove the unused variable, enhancing code clarity and readability.
- This change contributes to cleaner code practices and maintains the focus on core functionality.
Actionable comments posted: 0
♻️ Duplicate comments (6)
asset/infrastructure/gcs/client_test.go (2)
91-96
:⚠️ Potential issue: Add nil checks to prevent potential panic.

The `Close()` method could panic if the object doesn't exist in the bucket. Apply this diff to add appropriate nil checks:

```diff
 func (w *mockWriter) Close() error {
     obj := w.bucket.objects[w.objectName]
+    if obj == nil {
+        return fmt.Errorf("object not found: %s", w.objectName)
+    }
     obj.data = w.buf.Bytes()
     obj.attrs = w.attrs
     return nil
 }
```
262-270
:⚠️ Potential issue: Add prefix validation in DeleteAll method.

The `DeleteAll` method should validate the prefix to prevent accidental deletion of all objects. Apply this diff to add prefix validation:

```diff
 func (c *testClient) DeleteAll(ctx context.Context, prefix string) error {
+    if prefix == "" {
+        return fmt.Errorf("prefix cannot be empty")
+    }
+    if strings.Contains(prefix, "..") {
+        return fmt.Errorf("prefix cannot contain path traversal")
+    }
     fullPrefix := path.Join(c.basePath, prefix)
     for name := range c.mockBucket.objects {
         if strings.HasPrefix(name, fullPrefix) {
             delete(c.mockBucket.objects, name)
         }
     }
     return nil
 }
```

asset/domain/validation/validator.go (1)
71-109
: 🛠️ Refactor suggestion: Refactor `ValidationContext.Validate` for better extensibility

The `Validate` method in `ValidationContext` performs type assertions to handle specific rule types like `*RequiredRule` and `*MaxLengthRule`. This approach limits extensibility because adding new rule types requires modifying this method each time.

Consider refactoring the `ValidationRule` interface and the `ValidationContext.Validate` method to handle all rules uniformly. By including the field name within the `ValidationRule` interface and letting each rule manage its own validation logic, you can simplify the validation process and make it easier to add new rules.

Apply this diff to implement the refactoring:

```diff
 // Modify the ValidationRule interface to include a Field method
 type ValidationRule interface {
+    // Field returns the field name that the rule applies to
+    Field() string
     // Validate performs the validation and returns any errors
     Validate(ctx context.Context, value interface{}) error
 }

 // Implement the Field method in RequiredRule and MaxLengthRule
+func (r *RequiredRule) Field() string {
+    return r.Field
+}
+func (r *MaxLengthRule) Field() string {
+    return r.Field
+}

 // Modify the ValidationContext to use a map of rules
 type ValidationContext struct {
-    Rules []ValidationRule
+    Rules map[string][]ValidationRule
 }

 // NewValidationContext creates a new validation context
 func NewValidationContext(rules ...ValidationRule) *ValidationContext {
+    ruleMap := make(map[string][]ValidationRule)
+    for _, rule := range rules {
+        field := rule.Field()
+        ruleMap[field] = append(ruleMap[field], rule)
+    }
     return &ValidationContext{
-        Rules: rules,
+        Rules: ruleMap,
     }
 }

 // Validate executes all validation rules in the context
 func (c *ValidationContext) Validate(ctx context.Context, value interface{}) Result {
     var validationErrors []*Error
     fields, ok := value.(map[string]interface{})
     if !ok {
+        return Invalid(NewValidationError("", "expected a map[string]interface{} as value"))
     } else {
-        // Existing type assertions and validation logic
+        for fieldName, rules := range c.Rules {
+            fieldValue := fields[fieldName]
+            for _, rule := range rules {
+                if err := rule.Validate(ctx, fieldValue); err != nil {
+                    var verr *Error
+                    if errors.As(err, &verr) {
+                        validationErrors = append(validationErrors, verr)
+                    } else {
+                        validationErrors = append(validationErrors, &Error{
+                            Field:   fieldName,
+                            Message: err.Error(),
+                        })
+                    }
+                }
+            }
+        }
     }
     if len(validationErrors) > 0 {
         return Invalid(validationErrors...)
     }
     return Valid()
 }
```

This refactoring removes the need for explicit type assertions and makes it easier to add new validation rules without changing the `ValidationContext` logic.

asset/domain/entity/asset.go (3)
35-46
:⚠️ Potential issue: Initialize all required fields in `NewAsset` to pass validation

The `Validate` method requires `url` and `contentType` to be non-empty, but `NewAsset` does not initialize `url`. This may cause validation to fail for newly created assets.

Consider modifying `NewAsset` to accept `url` as a parameter and initialize all required fields:

```diff
-func NewAsset(id id.ID, name string, size int64, contentType string) *Asset {
+func NewAsset(id id.ID, name string, size int64, url string, contentType string) *Asset {
     now := time.Now()
     return &Asset{
         id:          id,
         name:        name,
         size:        size,
+        url:         url, // Initialize url
         contentType: contentType,
         status:      StatusPending,
         createdAt:   now,
         updatedAt:   now,
     }
 }
```

Alternatively, if `url` is not available at creation time, you might adjust the `Validate` method to account for that, or initialize `url` with a default value.
83-87
: 🛠️ Refactor suggestion: Add status validation in `UpdateStatus`

The `UpdateStatus` method should validate the status value before updating to ensure it's a valid `Status`. Consider adding a validation check:

```diff
 func (a *Asset) UpdateStatus(status Status, err string) {
+    if !status.IsValid() {
+        // Handle invalid status, e.g., return an error or log
+        return
+    }
     a.status = status
     a.error = err
     a.updatedAt = time.Now()
 }
```

Also, implement an `IsValid` method for the `Status` type:

```go
func (s Status) IsValid() bool {
    switch s {
    case StatusPending, StatusActive, StatusExtracting, StatusError:
        return true
    default:
        return false
    }
}
```

This ensures that only valid status values are assigned, preventing potential errors elsewhere in the application.
89-100
: 🛠️ Refactor suggestion: Add validation for metadata updates in `UpdateMetadata`

The `UpdateMetadata` method should validate the `url` format and `contentType`. Currently, it accepts any non-empty string without validation. Consider adding validation:

```diff
+import "net/url"

 func (a *Asset) UpdateMetadata(name, urlStr, contentType string) {
     if name != "" {
         a.name = name
     }
     if urlStr != "" {
+        if _, err := url.ParseRequestURI(urlStr); err != nil {
+            // Handle invalid URL, e.g., return or log
+            return
+        }
         a.url = urlStr
     }
     if contentType != "" {
+        if !isValidContentType(contentType) {
+            // Handle invalid contentType, e.g., return or log
+            return
+        }
         a.contentType = contentType
     }
     a.updatedAt = time.Now()
 }
```

Additionally, implement the `isValidContentType` function as needed to validate the content type.
🧹 Nitpick comments (1)
asset/infrastructure/gcs/client_test.go (1)
609-652
: Add test cases for edge cases in DeleteAll.

The test coverage for `DeleteAll` should include validation of empty prefixes and path traversal attempts. Add these test cases to `TestClient_DeleteAll`:

```diff
 func TestClient_DeleteAll(t *testing.T) {
     client := newTestClient(t)
+
+    // Test empty prefix
+    err := client.DeleteAll(context.Background(), "")
+    assert.Error(t, err)
+    assert.Contains(t, err.Error(), "prefix cannot be empty")
+
+    // Test path traversal attempt
+    err = client.DeleteAll(context.Background(), "../test")
+    assert.Error(t, err)
+    assert.Contains(t, err.Error(), "prefix cannot contain path traversal")

     // Create test objects with different prefixes
     // ... rest of the existing test
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
asset/domain/builder/asset.go
(1 hunks)asset/domain/builder/tests/group_test.go
(1 hunks)asset/domain/entity/asset.go
(1 hunks)asset/domain/entity/group.go
(1 hunks)asset/domain/validation/validator.go
(1 hunks)asset/infrastructure/gcs/client.go
(1 hunks)asset/infrastructure/gcs/client_test.go
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- asset/domain/builder/tests/group_test.go
- asset/infrastructure/gcs/client.go
- asset/domain/builder/asset.go
- asset/domain/entity/group.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: CI
🔇 Additional comments (1)
asset/domain/entity/asset.go (1)
117-120
: Add size validation in `SetSize` method

The `SetSize` method should validate that the `size` is non-negative to prevent invalid asset states. Consider adding a validation check:

```diff
 func (a *Asset) SetSize(size int64) {
+    if size < 0 {
+        // Handle invalid size, e.g., return or set to zero
+        return
+    }
     a.size = size
     a.updatedAt = time.Now()
 }
```

This ensures that the `Asset` instance cannot have a negative size, which could cause issues in storage or calculations.
Force-pushed from b4edba7 to ae883ae (Compare)
Actionable comments posted: 4
♻️ Duplicate comments (19)
asset/domain/entity/tests/group_test.go (3)
28-43
: 🛠️ Refactor suggestion: Replace time.Sleep with deterministic testing approach.

Using `time.Sleep` for testing time-based functionality makes tests non-deterministic and slower. Consider using a clock interface that can be controlled in tests. Example implementation:

```diff
+type Clock interface {
+    Now() time.Time
+}
+
+type TestClock struct {
+    current time.Time
+}
+
+func (c *TestClock) Now() time.Time {
+    return c.current
+}

 func TestGroup_UpdateName(t *testing.T) {
+    clock := &TestClock{current: time.Now()}
-    group := entity.NewGroup(id.NewGroupID(), "test-group")
+    group := entity.NewGroupWithClock(id.NewGroupID(), "test-group", clock)
     initialUpdatedAt := group.UpdatedAt()
-    time.Sleep(time.Millisecond)
+    clock.current = clock.current.Add(time.Second)

     // Test valid name update
     err := group.UpdateName("new-name")
```
45-60
: 🛠️ Refactor suggestion: Replace time.Sleep with deterministic testing approach.

Similar to the previous test, using `time.Sleep` should be replaced with a controlled clock interface.
62-77
: 🛠️ Refactor suggestion: Replace time.Sleep with deterministic testing approach.

Similar to the previous tests, using `time.Sleep` should be replaced with a controlled clock interface.

asset/domain/validation/validator.go (1)
71-129
: 🛠️ Refactor suggestion: Refactor `ValidationContext.Validate` for better extensibility

The `Validate` method in `ValidationContext` currently uses type assertions to handle specific rule types like `*RequiredRule` and `*MaxLengthRule`. This approach can hinder extensibility because adding new rule types would require modifying this method each time.

Consider refactoring the `ValidationRule` interface and the `ValidationContext.Validate` method to handle all rules uniformly. By adding a `Field()` method to the `ValidationRule` interface and letting each rule manage its own validation logic, the validation process becomes more scalable and easier to maintain.

Apply this diff to implement the suggested refactoring:

```diff
 // ValidationRule defines a single validation rule
 type ValidationRule interface {
+    // Field returns the field name that the rule applies to
+    Field() string
     // Validate performs the validation and returns any errors
     Validate(ctx context.Context, value interface{}) error
 }

+// Implement the Field method in RequiredRule and MaxLengthRule
+func (r *RequiredRule) Field() string {
+    return r.Field
+}
+
+func (r *MaxLengthRule) Field() string {
+    return r.Field
+}

 // ValidationContext holds the context for validation
 type ValidationContext struct {
-    Rules []ValidationRule
+    Rules map[string][]ValidationRule
 }

 // NewValidationContext creates a new validation context
 func NewValidationContext(rules ...ValidationRule) *ValidationContext {
-    return &ValidationContext{
-        Rules: rules,
-    }
+    ruleMap := make(map[string][]ValidationRule)
+    for _, rule := range rules {
+        field := rule.Field()
+        ruleMap[field] = append(ruleMap[field], rule)
+    }
+    return &ValidationContext{
+        Rules: ruleMap,
+    }
 }

 // Validate executes all validation rules in the context
 func (c *ValidationContext) Validate(ctx context.Context, value interface{}) Result {
     var validationErrors []*Error
-    // If value is a map, validate each field with its corresponding rules
-    if fields, ok := value.(map[string]interface{}); ok {
-        for _, rule := range c.Rules {
-            if r, ok := rule.(*RequiredRule); ok {
-                if fieldValue, exists := fields[r.Field]; exists {
-                    // Validation logic
-                }
-            } else if r, ok := rule.(*MaxLengthRule); ok {
-                if fieldValue, exists := fields[r.Field]; exists {
-                    // Validation logic
-                }
-            }
-        }
-    } else {
-        // If value is not a map, validate directly
-        for _, rule := range c.Rules {
-            // Direct validation logic
-        }
-    }
+    fields, ok := value.(map[string]interface{})
+    if !ok {
+        return Invalid(NewValidationError("", "expected a map[string]interface{} as value"))
+    }
+
+    for fieldName, rules := range c.Rules {
+        fieldValue := fields[fieldName]
+        for _, rule := range rules {
+            if err := rule.Validate(ctx, fieldValue); err != nil {
+                var verr *Error
+                if errors.As(err, &verr) {
+                    validationErrors = append(validationErrors, verr)
+                } else {
+                    validationErrors = append(validationErrors, &Error{
+                        Field:   fieldName,
+                        Message: err.Error(),
+                    })
+                }
+            }
+        }
+    }

     if len(validationErrors) > 0 {
         return Invalid(validationErrors...)
     }
     return Valid()
 }
```

asset/domain/entity/asset.go (4)
35-46
: 🛠️ Refactor suggestion: Ensure required fields are initialized in `NewAsset`

The `NewAsset` function does not initialize the `url` field, which is required for a valid `Asset`. Since the `Validate` method checks that `url` is not empty, assets created with `NewAsset` may fail validation.

Consider modifying `NewAsset` to include `url` as a parameter and initialize all required fields:

```diff
-func NewAsset(id id.ID, name string, size int64, contentType string) *Asset {
+func NewAsset(id id.ID, name string, size int64, url string, contentType string) *Asset {
     now := time.Now()
     return &Asset{
         id:          id,
         name:        name,
         size:        size,
+        url:         url,
         contentType: contentType,
         status:      StatusPending,
         createdAt:   now,
         updatedAt:   now,
     }
 }
```
83-87
: 🛠️ Refactor suggestion: Add validation for status in `UpdateStatus`

The `UpdateStatus` method should validate the provided `status` to ensure it is a valid `Status` value. This prevents invalid statuses from being assigned to an `Asset`.

Add a validation check using an `IsValid` method for the `Status` type:

```diff
+func (s Status) IsValid() bool {
+    switch s {
+    case StatusPending, StatusActive, StatusExtracting, StatusError:
+        return true
+    default:
+        return false
+    }
+}

 func (a *Asset) UpdateStatus(status Status, err string) {
+    if !status.IsValid() {
+        return // Or handle the error appropriately
+    }
     a.status = status
     a.error = err
     a.updatedAt = time.Now()
 }
```
89-100
: 🛠️ Refactor suggestion: Add validation for URL format and content type in `UpdateMetadata`

The `UpdateMetadata` method currently accepts any non-empty strings for `url` and `contentType` without validation. Adding validation ensures that the `Asset` maintains valid and consistent metadata.

Consider adding validation for the URL format and content type:

```diff
+import (
+    "mime"
+    "net/url"
+)

 func (a *Asset) UpdateMetadata(name, urlStr, contentType string) {
     if name != "" {
         a.name = name
     }
     if urlStr != "" {
+        if _, err := url.ParseRequestURI(urlStr); err != nil {
+            return // Or handle the error appropriately
+        }
         a.url = urlStr
     }
     if contentType != "" {
+        if _, _, err := mime.ParseMediaType(contentType); err != nil {
+            return // Or handle the error appropriately
+        }
         a.contentType = contentType
     }
     a.updatedAt = time.Now()
 }
```
117-120
: 🛠️ Refactor suggestion: Validate non-negative size in `SetSize`

The `SetSize` method should ensure that the `size` is non-negative. Setting a negative size may lead to inconsistencies and errors in asset handling.

Add a validation check for the `size` parameter:

```diff
 func (a *Asset) SetSize(size int64) {
+    if size < 0 {
+        return // Or handle the error appropriately
+    }
     a.size = size
     a.updatedAt = time.Now()
 }
```

asset/infrastructure/pubsub/pubsub.go (3)
35-40
: 🛠️ Refactor suggestion: Add input validation for constructor parameters in `NewAssetPubSub`

The constructor should validate that the `publisher` is not `nil` and the `topic` string is not empty to prevent runtime errors due to invalid initial configuration. Implement input validation:

```diff
 func NewAssetPubSub(publisher Publisher, topic string) *AssetPubSub {
+    if publisher == nil {
+        panic("publisher cannot be nil")
+    }
+    if topic == "" {
+        panic("topic cannot be empty")
+    }
     return &AssetPubSub{
         publisher: publisher,
         topic:     topic,
     }
 }
```
42-51
: 🛠️ Refactor suggestion: Add input validation and check for duplicate subscriptions in `Subscribe`

The `Subscribe` method should validate that the `handler` is not `nil` and prevent duplicate subscriptions to ensure consistent event handling. Implement validation and duplicate checks:

```diff
 func (p *AssetPubSub) Subscribe(eventType repository.EventType, handler repository.EventHandler) {
+    if handler == nil {
+        panic("handler cannot be nil")
+    }
+
     p.mu.Lock()
     defer p.mu.Unlock()

+    // Check for duplicates
+    for _, s := range p.subscriptions {
+        if s.eventType == eventType && s.handler == handler {
+            return // Already subscribed
+        }
+    }
     p.subscriptions = append(p.subscriptions, subscription{
         eventType: eventType,
         handler:   handler,
     })
 }
```
53-65
: 🛠️ Refactor suggestion: Add input validation for `handler` parameter in `Unsubscribe`

The `Unsubscribe` method should validate that the `handler` is not `nil` to avoid potential runtime panics. Add a nil check for the `handler` parameter:

```diff
 func (p *AssetPubSub) Unsubscribe(eventType repository.EventType, handler repository.EventHandler) {
+    if handler == nil {
+        panic("handler cannot be nil")
+    }
+
     p.mu.Lock()
     defer p.mu.Unlock()

     handlerValue := reflect.ValueOf(handler)
     for i := len(p.subscriptions) - 1; i >= 0; i-- {
         s := p.subscriptions[i]
         if s.eventType == eventType && reflect.ValueOf(s.handler) == handlerValue {
             p.subscriptions = append(p.subscriptions[:i], p.subscriptions[i+1:]...)
         }
     }
 }
```

asset/graphql/schema.resolvers.go (3)
14-51
:⚠️ Potential issue: Add cleanup mechanism for failed uploads.

The `UploadAsset` resolver should handle cleanup of created metadata if the file upload fails. This prevents orphaned metadata in the system.

```diff
 func (r *mutationResolver) UploadAsset(ctx context.Context, input UploadAssetInput) (*UploadAssetPayload, error) {
     assetID, err := id.IDFrom(input.ID)
     if err != nil {
         return nil, err
     }

     // Create asset metadata
     asset := entity.NewAsset(
         assetID,
         input.File.Filename,
         input.File.Size,
         input.File.ContentType,
     )

     // Create asset metadata first
     if err := r.assetService.Create(ctx, asset); err != nil {
         return nil, err
     }

+    // Ensure cleanup on failure
+    success := false
+    defer func() {
+        if !success {
+            _ = r.assetService.Delete(ctx, assetID)
+        }
+    }()

     // Upload file content
     if err := r.assetService.Upload(ctx, assetID, FileFromUpload(&input.File)); err != nil {
         return nil, err
     }

     // Update asset status to active
     asset.UpdateStatus(entity.StatusActive, "")
     if err := r.assetService.Update(ctx, asset); err != nil {
         return nil, err
     }

+    success = true
     return &UploadAssetPayload{
         Asset: AssetFromDomain(asset),
     }, nil
 }
```
70-93
: 🛠️ Refactor suggestion: Add validation for metadata fields.

The `UpdateAssetMetadata` resolver should validate the input fields before updating the asset. Add validation to ensure:

- `name` is not empty and has a reasonable length
- `size` is a positive integer within acceptable limits
- `contentType` matches a valid MIME type format
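A minimal sketch of those three checks, using the standard library's `mime.ParseMediaType` for the MIME format test. The helper name `validateMetadata` and the 255-character limit are illustrative assumptions, not part of this PR:

```go
package main

import (
	"fmt"
	"mime"
	"strings"
)

const maxNameLength = 255 // illustrative limit, not from the PR

// validateMetadata demonstrates the checks the review asks for:
// a non-empty, bounded name, a positive size, and a parseable MIME type.
func validateMetadata(name string, size int64, contentType string) error {
	if strings.TrimSpace(name) == "" {
		return fmt.Errorf("name must not be empty")
	}
	if len(name) > maxNameLength {
		return fmt.Errorf("name must not exceed %d characters", maxNameLength)
	}
	if size <= 0 {
		return fmt.Errorf("size must be a positive integer")
	}
	if _, _, err := mime.ParseMediaType(contentType); err != nil {
		return fmt.Errorf("invalid content type %q: %w", contentType, err)
	}
	return nil
}

func main() {
	fmt.Println(validateMetadata("photo.png", 1024, "image/png"))       // <nil>
	fmt.Println(validateMetadata("", 1024, "image/png") != nil)         // true
	fmt.Println(validateMetadata("photo.png", -1, "image/png") != nil)  // true
	fmt.Println(validateMetadata("photo.png", 10, "not a type") != nil) // true
}
```

Using `mime.ParseMediaType` rather than a hand-rolled regex also accepts parameterized types like `text/plain; charset=utf-8`.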
206-220
: 🛠️ Refactor suggestion: Add pagination to Assets query.

The `Assets` query returns all assets without pagination, which could cause performance issues with large datasets. Consider adding:

- Pagination parameters (first/after or limit/offset)
- Cursor-based pagination
- Total count
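The cursor-based variant suggested above can be sketched like this. The opaque base64 cursor, the `paginate` helper, and the `first`/`after` parameter names are illustrative assumptions (following the common Relay-style convention), not the PR's actual API:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strconv"
)

// encodeCursor turns an item offset into an opaque cursor string,
// the usual trick behind "first/after" pagination.
func encodeCursor(offset int) string {
	return base64.StdEncoding.EncodeToString([]byte("offset:" + strconv.Itoa(offset)))
}

func decodeCursor(cursor string) (int, error) {
	raw, err := base64.StdEncoding.DecodeString(cursor)
	if err != nil {
		return 0, err
	}
	var offset int
	if _, err := fmt.Sscanf(string(raw), "offset:%d", &offset); err != nil {
		return 0, err
	}
	return offset, nil
}

// paginate returns one page plus the cursor for the next page and the total count.
func paginate(assets []string, first int, after string) (page []string, next string, total int, err error) {
	start := 0
	if after != "" {
		if start, err = decodeCursor(after); err != nil {
			return nil, "", 0, err
		}
	}
	end := start + first
	if end > len(assets) {
		end = len(assets)
	}
	if start > end {
		start = end
	}
	return assets[start:end], encodeCursor(end), len(assets), nil
}

func main() {
	assets := []string{"a1", "a2", "a3", "a4", "a5"}
	page, next, total, _ := paginate(assets, 2, "")
	fmt.Println(page, total) // [a1 a2] 5
	page, _, _, _ = paginate(assets, 2, next)
	fmt.Println(page) // [a3 a4]
}
```

In a real repository-backed implementation the cursor would typically encode a sort key (e.g. created-at plus ID) rather than a raw offset, so pages stay stable while rows are inserted.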
asset/usecase/interactor/interactor.go (3)
26-39
: 🛠️ Refactor suggestion: Add nil checks for required dependencies.

The constructor should validate that required dependencies are not nil to prevent potential panics during runtime.

```diff
 func NewAssetInteractor(
     repo repository.PersistenceRepository,
     pubsub repository.PubSubRepository,
     txManager assetusecase.TransactionManager,
     jobRepo repository.DecompressJobRepository,
 ) *AssetInteractor {
+    if repo == nil {
+        panic("repo is required")
+    }
+    if pubsub == nil {
+        panic("pubsub is required")
+    }
+    if txManager == nil {
+        panic("txManager is required")
+    }
+    if jobRepo == nil {
+        panic("jobRepo is required")
+    }
     return &AssetInteractor{
         repo:         repo,
         decompressor: decompress.NewZipDecompressor(),
         pubsub:       pubsub,
         txManager:    txManager,
         jobRepo:      jobRepo,
     }
 }
```
136-163
: 🛠️ Refactor suggestion: Add content validation in UploadAssetContent.

The method should validate the content before upload to prevent issues with large files or invalid content.

```diff
 func (i *AssetInteractor) UploadAssetContent(ctx context.Context, id id.ID, content io.Reader) *assetusecase.Result {
+    // Add content validation
+    if validator, ok := content.(interface{ Len() int64 }); ok {
+        if validator.Len() > MaxContentSize {
+            return assetusecase.NewErrorResult("CONTENT_TOO_LARGE", "content exceeds maximum size limit", nil)
+        }
+    }
     var uploadedAsset *entity.Asset
     err := i.txManager.WithTransaction(ctx, func(ctx context.Context) error {
```
214-270
:⚠️ Potential issue: Fix potential goroutine leak and improve error handling in background job.

The background goroutine has several issues:

- No context cancellation
- Channel is read twice, which will cause deadlock
- Errors in jobRepo.Save are ignored

```diff
-    go func() {
+    go func(jobID string, assetID id.ID) {
+        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Minute)
+        defer cancel()
+
         status.Status = "processing"
-        i.jobRepo.Save(ctx, status)
+        if err := i.jobRepo.Save(ctx, status); err != nil {
+            log.Errorfc(ctx, "failed to update job status: %v", err)
+            return
+        }

         // Process decompressed files
         totalFiles := 0
-        processedFiles := 0
-        for range ch {
-            totalFiles++
-        }
-
-        for range ch {
+        for file := range ch {
+            totalFiles++
             processedFiles++
             progress := float64(processedFiles) / float64(totalFiles) * 100
-            i.jobRepo.UpdateProgress(ctx, jobID, progress)
+            if err := i.jobRepo.UpdateProgress(ctx, jobID, progress); err != nil {
+                log.Errorfc(ctx, "failed to update progress: %v", err)
+            }
         }

         status.Status = "completed"
         status.Progress = 100
         status.CompletedAt = time.Now()
-        i.jobRepo.Save(ctx, status)
+        if err := i.jobRepo.Save(ctx, status); err != nil {
+            log.Errorfc(ctx, "failed to save final status: %v", err)
+        }
-    }()
+    }(jobID, assetID)
```

asset/infrastructure/gcs/client_test.go (2)
262-270
: 🛠️ Refactor suggestion: Add prefix validation in DeleteAll method.

The `DeleteAll` method should validate the prefix to prevent accidental deletion of all objects.

```diff
 func (c *testClient) DeleteAll(ctx context.Context, prefix string) error {
+    if prefix == "" {
+        return fmt.Errorf("prefix cannot be empty")
+    }
+    if strings.Contains(prefix, "..") {
+        return fmt.Errorf("prefix cannot contain path traversal")
+    }
     fullPrefix := path.Join(c.basePath, prefix)
     // ... rest of the code
 }
```
91-96
:⚠️ Potential issue: Add nil checks to prevent potential panic.

The `Close()` method could panic if the object doesn't exist in the bucket.

```diff
 func (w *mockWriter) Close() error {
     obj := w.bucket.objects[w.objectName]
+    if obj == nil {
+        return fmt.Errorf("object not found: %s", w.objectName)
+    }
     obj.data = w.buf.Bytes()
     obj.attrs = w.attrs
     return nil
 }
```
🧹 Nitpick comments (9)
asset/domain/entity/tests/group_test.go (1)
13-26
: Consider using table-driven tests for comprehensive coverage.

The test is well-structured but could be enhanced using table-driven tests to cover more scenarios (e.g., different name formats, edge cases). Example refactor:

```diff
 func TestNewGroup(t *testing.T) {
-    groupID := id.NewGroupID()
-    name := "test-group"
-
-    group := entity.NewGroup(groupID, name)
-
-    assert.Equal(t, groupID, group.ID())
-    assert.Equal(t, name, group.Name())
-    assert.Empty(t, group.Policy())
-    assert.Empty(t, group.Description())
-    assert.NotZero(t, group.CreatedAt())
-    assert.NotZero(t, group.UpdatedAt())
-    assert.Equal(t, group.CreatedAt(), group.UpdatedAt())
+    tests := []struct {
+        name      string
+        groupID   id.GroupID
+        groupName string
+    }{
+        {
+            name:      "valid group",
+            groupID:   id.NewGroupID(),
+            groupName: "test-group",
+        },
+        {
+            name:      "group with special characters",
+            groupID:   id.NewGroupID(),
+            groupName: "test-group-123_@",
+        },
+    }
+
+    for _, tt := range tests {
+        t.Run(tt.name, func(t *testing.T) {
+            group := entity.NewGroup(tt.groupID, tt.groupName)
+
+            assert.Equal(t, tt.groupID, group.ID())
+            assert.Equal(t, tt.groupName, group.Name())
+            assert.Empty(t, group.Policy())
+            assert.Empty(t, group.Description())
+            assert.NotZero(t, group.CreatedAt())
+            assert.NotZero(t, group.UpdatedAt())
+            assert.Equal(t, group.CreatedAt(), group.UpdatedAt())
+        })
+    }
 }
```

asset/infrastructure/memory/decompress_job_test.go (4)
18-43
: Consider adding more edge cases to strengthen test coverage. While the current test cases cover the basic scenarios well, consider adding tests for:
- Empty JobID validation
- Invalid status values
- Zero/invalid StartedAt time
```go
// Example additional test cases
t.Run("Save with invalid data", func(t *testing.T) {
    invalidStatus := &assetusecase.DecompressStatus{
        JobID:     "", // empty ID
        Status:    "invalid_status",
        StartedAt: time.Time{}, // zero time
    }
    err := repo.Save(ctx, invalidStatus)
    assert.Error(t, err)
})
```
45-80
: Enhance List test coverage with more assertions. The test verifies the length but could be more thorough by:
- Asserting the actual contents of returned jobs
- Testing an empty repository
- Verifying the order of returned jobs (if order matters)
```go
// Add assertions to verify contents
assert.Contains(t, jobs, status1)
assert.Contains(t, jobs, status3)
assert.NotContains(t, jobs, status2)

// Add test for empty repository
t.Run("empty repository", func(t *testing.T) {
    emptyRepo := NewDecompressJobRepository()
    jobs, err := emptyRepo.List(ctx)
    assert.NoError(t, err)
    assert.Empty(t, jobs)
})
```
106-129
: Add boundary tests for progress updates. Consider adding test cases for:
- Negative progress values
- Progress values exceeding 100
- Progress updates on completed/failed jobs
```go
// Example additional test cases
t.Run("invalid progress values", func(t *testing.T) {
    // Test negative progress
    err := repo.UpdateProgress(ctx, "job1", -10.0)
    assert.Error(t, err)

    // Test progress > 100
    err = repo.UpdateProgress(ctx, "job1", 110.0)
    assert.Error(t, err)

    // Test update on completed job
    repo.Complete(ctx, "job1")
    err = repo.UpdateProgress(ctx, "job1", 50.0)
    assert.Error(t, err)
})
```
14-171
: Consider structural improvements to the test suite. While the test coverage is good, consider these improvements:
- Enable parallel test execution using `t.Parallel()`
- Add cleanup between tests using `t.Cleanup()`
- Use table-driven tests for similar test cases
- Add benchmarks for performance-critical operations
```go
// Example table-driven test
func TestDecompressJobRepository_StatusTransitions(t *testing.T) {
    tests := []struct {
        name          string
        initialStatus string
        operation     func(context.Context, string) error
        wantErr       bool
    }{
        {
            name:          "complete processing job",
            initialStatus: "processing",
            operation:     repo.Complete,
            wantErr:       false,
        },
        // Add more test cases
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            t.Parallel()
            repo := NewDecompressJobRepository()
            t.Cleanup(func() {
                // Cleanup code
            })
            // Test implementation
        })
    }
}
```

asset/domain/validation/validator.go (2)
146-159
: EnhanceRequiredRule
to handle empty collectionsThe
RequiredRule.Validate
method checks fornil
values and empty strings, but it doesn't account for empty slices, maps, or other composite types. For a more robust validation, consider extending the rule to check for empty collections.Apply this diff to enhance the
RequiredRule.Validate
method:func (r *RequiredRule) Validate(ctx context.Context, value interface{}) error { if value == nil { return NewValidationError(r.Field, "field is required") } switch v := value.(type) { case string: if v == "" { return NewValidationError(r.Field, "field is required") } + case []interface{}: + if len(v) == 0 { + return NewValidationError(r.Field, "field is required") + } + case map[interface{}]interface{}: + if len(v) == 0 { + return NewValidationError(r.Field, "field is required") + } } return nil }
167-174
: Handle non-string types inMaxLengthRule
The
MaxLengthRule.Validate
method currently only checks the length of string values. If there's a possibility of non-string types being passed, it might silently pass without validation. Consider adding a type check or returning an error if the value is not a string.Apply this diff to enhance the
MaxLengthRule.Validate
method:func (r *MaxLengthRule) Validate(ctx context.Context, value interface{}) error { + if value == nil { + return nil + } if str, ok := value.(string); ok { if len(str) > r.MaxLength { return NewValidationError(r.Field, fmt.Sprintf("length must not exceed %d characters", r.MaxLength)) } + } else { + return NewValidationError(r.Field, "field must be a string") } return nil }asset/graphql/gqlgen.yml (1)
32-32
: Fix formatting issues in `gqlgen.yml`
There are formatting issues detected at the end of the file:
- No newline character at the end of the file.
- Trailing spaces on the last line.
Please add a newline and remove trailing spaces to comply with YAML linting standards.
Apply this diff to fix the issues:
```diff
-  - github.com/99designs/gqlgen/graphql.Upload 
+  - github.com/99designs/gqlgen/graphql.Upload
```
🧰 Tools
🪛 yamllint (1.35.1)
[error] 32-32: no new line character at the end of file
(new-line-at-end-of-file)
[error] 32-32: trailing spaces
(trailing-spaces)
asset/infrastructure/decompress/zip_test.go (1)
12-49
: Add parallel testing and large file handling test cases.

Consider the following improvements:

- Enable parallel testing using `t.Parallel()` for faster test execution
- Add test cases for large files to verify memory efficiency

Example implementation:

```diff
 func TestZipDecompressor_DecompressWithContent(t *testing.T) {
+    t.Parallel()
     // ... existing code ...
 }

 func TestZipDecompressor_CompressWithContent(t *testing.T) {
+    t.Parallel()
     // ... existing code ...
 }

+func TestZipDecompressor_LargeFile(t *testing.T) {
+    t.Parallel()
+    // Create a large file (e.g., 100MB)
+    largeContent := strings.Repeat("a", 100*1024*1024)
+    files := map[string]io.Reader{
+        "large.txt": strings.NewReader(largeContent),
+    }
+    // Test compression and decompression
+    // ... implementation ...
+}
```

Also applies to: 51-91
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
go.sum
is excluded by!**/*.sum
📒 Files selected for processing (41)
- .gitignore (1 hunks)
- asset/domain/builder/asset.go (1 hunks)
- asset/domain/builder/group.go (1 hunks)
- asset/domain/builder/tests/asset_test.go (1 hunks)
- asset/domain/builder/tests/group_test.go (1 hunks)
- asset/domain/entity/asset.go (1 hunks)
- asset/domain/entity/group.go (1 hunks)
- asset/domain/entity/tests/asset_test.go (1 hunks)
- asset/domain/entity/tests/group_test.go (1 hunks)
- asset/domain/errors.go (1 hunks)
- asset/domain/id/id.go (1 hunks)
- asset/domain/tests/errors_test.go (1 hunks)
- asset/domain/tests/types_test.go (1 hunks)
- asset/domain/types.go (1 hunks)
- asset/domain/validation/validator.go (1 hunks)
- asset/domain/validation/validator_test.go (1 hunks)
- asset/graphql/gqlgen.yml (1 hunks)
- asset/graphql/helper.go (1 hunks)
- asset/graphql/model.go (1 hunks)
- asset/graphql/resolver.go (1 hunks)
- asset/graphql/schema.graphql (1 hunks)
- asset/graphql/schema.resolvers.go (1 hunks)
- asset/infrastructure/decompress/zip.go (1 hunks)
- asset/infrastructure/decompress/zip_test.go (1 hunks)
- asset/infrastructure/gcs/client.go (1 hunks)
- asset/infrastructure/gcs/client_test.go (1 hunks)
- asset/infrastructure/memory/decompress_job.go (1 hunks)
- asset/infrastructure/memory/decompress_job_test.go (1 hunks)
- asset/infrastructure/pubsub/pubsub.go (1 hunks)
- asset/infrastructure/pubsub/pubsub_test.go (1 hunks)
- asset/repository/decompress_job.go (1 hunks)
- asset/repository/decompressor_repository.go (1 hunks)
- asset/repository/event.go (1 hunks)
- asset/repository/group_repository.go (1 hunks)
- asset/repository/persistence_repository.go (1 hunks)
- asset/repository/pubsub_repository.go (1 hunks)
- asset/usecase/interactor/interactor.go (1 hunks)
- asset/usecase/result.go (1 hunks)
- asset/usecase/transaction.go (1 hunks)
- asset/usecase/usecase.go (1 hunks)
- go.mod (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (23)
- .gitignore
- asset/repository/persistence_repository.go
- asset/graphql/resolver.go
- asset/graphql/helper.go
- asset/usecase/transaction.go
- asset/repository/decompress_job.go
- asset/repository/group_repository.go
- asset/domain/errors.go
- asset/repository/pubsub_repository.go
- asset/repository/decompressor_repository.go
- asset/repository/event.go
- asset/domain/builder/tests/group_test.go
- asset/domain/validation/validator_test.go
- asset/usecase/usecase.go
- asset/usecase/result.go
- asset/domain/tests/types_test.go
- asset/domain/types.go
- asset/infrastructure/decompress/zip.go
- asset/domain/builder/group.go
- asset/infrastructure/pubsub/pubsub_test.go
- asset/infrastructure/memory/decompress_job.go
- asset/domain/entity/group.go
- asset/domain/builder/tests/asset_test.go
🧰 Additional context used
🪛 yamllint (1.35.1)
asset/graphql/gqlgen.yml
[error] 32-32: no new line character at the end of file
(new-line-at-end-of-file)
[error] 32-32: trailing spaces
(trailing-spaces)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: CI
🔇 Additional comments (16)
asset/infrastructure/memory/decompress_job_test.go (1)
1-12: LGTM! Well-organized imports and package structure.

The package structure follows Go conventions, and all imports are properly utilized.
asset/domain/validation/validator.go (2)
10-17: Well-implemented Error struct

The Error struct correctly implements the error interface with the Error() method, providing clear error messages that include the field name and message. This adheres to Go's best practices for custom error types.
20-44: Clear and concise Result type

The Result type and its associated functions NewValidationResult, Valid, and Invalid provide a straightforward way to encapsulate validation outcomes. This structure enhances code readability and maintains consistency in handling validation results.

asset/infrastructure/gcs/client.go (2)
67-83: Add input validation in Create method

The Create method should validate the input asset before proceeding. This prevents potential nil pointer dereferences and ensures that required fields are present.

Apply this diff to add input validation:

 func (c *Client) Create(ctx context.Context, asset *entity.Asset) error {
+	if asset == nil {
+		return fmt.Errorf("asset cannot be nil")
+	}
+	if asset.ID().IsNil() {
+		return fmt.Errorf("asset ID cannot be empty")
+	}
+	if asset.Name() == "" {
+		return fmt.Errorf("asset name cannot be empty")
+	}
+	if asset.ContentType() == "" {
+		return fmt.Errorf("asset content type cannot be empty")
+	}
 	obj := c.getObject(asset.ID())
157-170: Set content type in Upload method

The Upload method does not set the content type of the uploaded object. Setting the content type ensures that the asset is correctly identified and handled when accessed.

Apply this diff to set the content type:

 func (c *Client) Upload(ctx context.Context, id id.ID, content io.Reader) error {
 	obj := c.getObject(id)
+
+	// Attempt to detect content type
+	buffer := make([]byte, 512)
+	n, err := content.Read(buffer)
+	if err != nil && err != io.EOF {
+		return fmt.Errorf("%w: %v", ErrFailedToUploadFile, err)
+	}
+	contentType := http.DetectContentType(buffer[:n])
+
 	writer := obj.NewWriter(ctx)
+	writer.ContentType = contentType
+
+	// Reset reader if it implements Seeker
+	if seeker, ok := content.(io.Seeker); ok {
+		_, _ = seeker.Seek(0, io.SeekStart)
+	} else {
+		// Prepend the read bytes to the reader
+		content = io.MultiReader(bytes.NewReader(buffer[:n]), content)
+	}
 	if _, err := io.Copy(writer, content); err != nil {
 		_ = writer.Close()
 		return fmt.Errorf("%w: %v", ErrFailedToUploadFile, err)
 	}

asset/domain/id/id.go (1)
1-42: Code is well-structured and type-safe

The implementation of type-safe IDs using generics enhances maintainability and reduces the risk of incorrect ID usage across different domains.
asset/domain/tests/errors_test.go (1)
1-93: LGTM! Well-structured error tests.

The test file effectively uses table-driven tests to validate error messages. The coverage is comprehensive, including all error scenarios.
asset/graphql/schema.graphql (3)
70-75: Add validation constraints for metadata fields.

The UpdateAssetMetadataInput should include validation constraints for size and content type.
97-101: Add validation for destination fields in MoveAssetInput.

The current implementation allows both toWorkspaceId and toProjectId to be null, which could lead to invalid states.
23-28: 🛠️ Refactor suggestion

Add pagination and filtering to assets query.

The current implementation of the assets query lacks pagination and filtering capabilities, which are essential for handling large datasets efficiently.

Consider adding pagination and filtering:

 type Query {
   # Get a single asset by ID
   asset(id: ID!): Asset!
   # List all assets
-  assets: [Asset!]!
+  assets(
+    first: Int
+    after: String
+    filter: AssetFilter
+  ): AssetConnection!
 }
+
+input AssetFilter {
+  status: AssetStatus
+  contentType: String
+  createdAfter: Time
+  createdBefore: Time
+}
+
+type AssetConnection {
+  edges: [AssetEdge!]!
+  pageInfo: PageInfo!
+}
+
+type AssetEdge {
+  node: Asset!
+  cursor: String!
+}
+
+type PageInfo {
+  hasNextPage: Boolean!
+  endCursor: String
+}

Likely invalid or redundant comment.
asset/graphql/model.go (1)
1-137: LGTM! Auto-generated code matches schema definition.

The generated models and their implementations align correctly with the GraphQL schema.
asset/domain/entity/tests/asset_test.go (2)
34-34: Replace time.Sleep with deterministic time control.

Using time.Sleep for testing time-based functionality is unreliable and can slow down tests.

Also applies to: 46-46, 69-69, 81-81, 93-93, 105-105
12-29: LGTM! Comprehensive test coverage for Asset entity.

The test suite thoroughly covers all Asset operations including creation, status updates, metadata updates, and movement between contexts.
Also applies to: 43-64, 66-76, 78-88, 90-100, 102-112
asset/infrastructure/decompress/zip_test.go (2)
93-129: LGTM! Well-implemented error handling tests.

The context cancellation and invalid zip content tests provide good coverage of error scenarios.

131-149: LGTM! Well-structured helper function.

The helper function effectively encapsulates the test zip creation logic.
go.mod (1)
32-32: Verify security implications of dependency updates.

Several critical dependencies have been updated:
- github.com/stretchr/testify → v1.9.0
- go.opentelemetry.io/otel → v1.29.0
- go.opentelemetry.io/otel/sdk → v1.29.0
- golang.org/x/crypto → v0.31.0
- golang.org/x/text → v0.21.0

Please verify:
- Compatibility with the rest of the codebase
- Any breaking changes in the APIs
- Security advisories for these versions
Also applies to: 38-39, 42-42, 44-44
✅ Verification successful
Dependencies verified - no security or compatibility issues found
The updated dependencies are safe to use:
- All packages show stable usage patterns in the codebase
- No security vulnerabilities reported in GitHub advisories
- No deprecated API usage detected
- Changes are compatible with existing implementations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for security advisories and compatibility issues

# Check for security advisories in GitHub
for pkg in "stretchr/testify" "opentelemetry-io/opentelemetry-go" "golang/crypto" "golang/text"; do
  echo "Checking advisories for $pkg"
  gh api graphql -f query='
  {
    securityVulnerabilities(first: 5, ecosystem: GO, package: "'$pkg'") {
      nodes {
        advisory { summary severity publishedAt }
        vulnerableVersionRange
        firstPatchedVersion { identifier }
      }
    }
  }'
done

# Check for breaking changes in go.sum
if [ -f go.sum ]; then
  echo "Checking for breaking changes in updated dependencies"
  rg "^(github.com/stretchr/testify|go.opentelemetry.io/otel|golang.org/x/crypto|golang.org/x/text)" go.sum
fi

Length of output: 5896
Script:
#!/bin/bash
# Check for usage patterns and potential breaking changes

echo "Checking testify usage patterns"
rg -l "github.com/stretchr/testify" --type go

echo -e "\nChecking OpenTelemetry usage"
rg -l "go.opentelemetry.io/otel" --type go

echo -e "\nChecking crypto package usage"
rg -l "golang.org/x/crypto" --type go

echo -e "\nChecking text package usage"
rg -l "golang.org/x/text" --type go

echo -e "\nChecking import statements for potential breaking changes"
rg "^import \(" -A 10 --type go | grep -E "(testify|opentelemetry|crypto|text)"

Length of output: 18026
t.Run("Complete and Fail", func(t *testing.T) {
	repo := NewDecompressJobRepository()
	status1 := &assetusecase.DecompressStatus{
		JobID:     "job1",
		Status:    "processing",
		StartedAt: time.Now(),
	}
	status2 := &assetusecase.DecompressStatus{
		JobID:     "job2",
		Status:    "processing",
		StartedAt: time.Now(),
	}

	err := repo.Save(ctx, status1)
	assert.NoError(t, err)
	err = repo.Save(ctx, status2)
	assert.NoError(t, err)

	// Test Complete
	err = repo.Complete(ctx, "job1")
	assert.NoError(t, err)
	got, _ := repo.Get(ctx, "job1")
	assert.Equal(t, "completed", got.Status)
	assert.Equal(t, 100.0, got.Progress)

	// Test Fail
	err = repo.Fail(ctx, "job2", "test error")
	assert.NoError(t, err)
	got, _ = repo.Get(ctx, "job2")
	assert.Equal(t, "failed", got.Status)
	assert.Equal(t, "test error", got.Error)

	// Test Complete non-existent
	err = repo.Complete(ctx, "non-existent")
	assert.Equal(t, rerror.ErrNotFound, err)

	// Test Fail non-existent
	err = repo.Fail(ctx, "non-existent", "error")
	assert.Equal(t, rerror.ErrNotFound, err)
})
🛠️ Refactor suggestion
Add state transition tests and improve error message validation.
The test suite should validate:
- State transition constraints:
- Completing an already completed job
- Failing an already failed job
- Completing a failed job
- Failing a completed job
- Error message validation:
- Empty error message in Fail
// Example additional test cases
t.Run("invalid state transitions", func(t *testing.T) {
// Setup
job := &assetusecase.DecompressStatus{
JobID: "job3",
Status: "processing",
}
repo.Save(ctx, job)
// Complete the job
repo.Complete(ctx, "job3")
// Try to complete again
err := repo.Complete(ctx, "job3")
assert.Error(t, err)
// Try to fail a completed job
err = repo.Fail(ctx, "job3", "error")
assert.Error(t, err)
})
t.Run("empty error message", func(t *testing.T) {
err := repo.Fail(ctx, "job1", "")
assert.Error(t, err)
})
func (b *AssetBuilder) ID(id id.ID) *AssetBuilder {
	b.a = entity.NewAsset(id, b.a.Name(), b.a.Size(), b.a.ContentType())
	return b
}
Avoid re-initializing the asset in the ID method

The ID method re-initializes the Asset instance using entity.NewAsset, which can overwrite fields previously set by other builder methods, potentially leading to data loss. Instead, consider updating only the id field of the existing asset to preserve all other fields.

Since the id field in entity.Asset is unexported and there's no SetID method, you might need to modify the entity.Asset struct to allow setting the id field. One approach is to add a SetID method to entity.Asset. Here's how you could adjust the ID method:
-func (b *AssetBuilder) ID(id id.ID) *AssetBuilder {
- b.a = entity.NewAsset(id, b.a.Name(), b.a.Size(), b.a.ContentType())
+func (b *AssetBuilder) ID(id id.ID) *AssetBuilder {
+ b.a.SetID(id)
return b
}
Ensure that you implement SetID within entity.Asset:
// In entity/asset.go
func (a *Asset) SetID(id id.ID) {
a.id = id
}
asset/infrastructure/gcs/client.go (Outdated)
func (c *Client) FindByGroup(ctx context.Context, groupID id.GroupID) ([]*entity.Asset, error) {
	var assets []*entity.Asset
	it := c.bucket.Objects(ctx, &storage.Query{Prefix: c.basePath})

	for {
		attrs, err := it.Next()
		if errors.Is(err, iterator.Done) {
			break
		}
		if err != nil {
			return nil, fmt.Errorf("%w: %v", ErrFailedToListAssets, err)
		}

		assetID, err := id.IDFrom(path.Base(attrs.Name))
		if err != nil {
			continue // skip invalid IDs
		}

		asset := entity.NewAsset(
			assetID,
			attrs.Metadata["name"],
			attrs.Size,
			attrs.ContentType,
		)

		if asset.GroupID() == groupID {
			assets = append(assets, asset)
		}
	}

	return assets, nil
}
Optimize FindByGroup method for performance and correctness

The FindByGroup method currently iterates over all assets in the bucket to find assets belonging to a specific group. This approach can be inefficient for buckets with a large number of assets. Additionally, the GroupID may not be correctly set in the asset instance because it's not retrieved from the object's metadata.

Consider the following improvements:

- Performance Optimization: Refine the storage.Query to narrow down the search scope. If your storage structure allows, adjust the Prefix to include the groupID, thereby reducing the number of items to iterate over.
- Accurate GroupID Retrieval: Ensure that the GroupID is stored in the object's metadata when assets are created or updated. Retrieve the GroupID from attrs.Metadata when constructing the asset instance.
func (c *Client) FindByGroup(ctx context.Context, groupID id.GroupID) ([]*entity.Asset, error) {
var assets []*entity.Asset
- it := c.bucket.Objects(ctx, &storage.Query{Prefix: c.basePath})
+ it := c.bucket.Objects(ctx, &storage.Query{Prefix: path.Join(c.basePath, groupID.String())})
for {
attrs, err := it.Next()
if errors.Is(err, iterator.Done) {
break
}
if err != nil {
return nil, fmt.Errorf("%w: %v", ErrFailedToListAssets, err)
}
assetID, err := id.IDFrom(path.Base(attrs.Name))
if err != nil {
continue // skip invalid IDs
}
asset := entity.NewAsset(
assetID,
attrs.Metadata["name"],
attrs.Size,
attrs.ContentType,
+ attrs.Metadata["group_id"], // Ensure GroupID is retrieved from metadata
)
- if asset.GroupID() == groupID {
- assets = append(assets, asset)
- }
+ assets = append(assets, asset)
}
return assets, nil
}
Committable suggestion skipped: line range outside the PR's diff.
// DeleteAssetsInGroup is the resolver for the deleteAssetsInGroup field.
func (r *mutationResolver) DeleteAssetsInGroup(ctx context.Context, input DeleteAssetsInGroupInput) (*DeleteAssetsInGroupPayload, error) {
	groupID, err := id.GroupIDFrom(input.GroupID)
	if err != nil {
		return nil, err
	}

	result := r.assetUsecase.DeleteAllAssetsInGroup(ctx, groupID)
	if !result.IsSuccess() {
		return nil, result
	}

	return &DeleteAssetsInGroupPayload{
		GroupID: input.GroupID,
	}, nil
}
🛠️ Refactor suggestion
Add transaction handling for group deletion.
The DeleteAssetsInGroup resolver should use a transaction to ensure atomicity of the group deletion operation. If one asset fails to delete, all changes should be rolled back.
func (r *mutationResolver) DeleteAssetsInGroup(ctx context.Context, input DeleteAssetsInGroupInput) (*DeleteAssetsInGroupPayload, error) {
	groupID, err := id.GroupIDFrom(input.GroupID)
	if err != nil {
		return nil, err
	}
+	err = r.assetUsecase.WithTransaction(ctx, func(ctx context.Context) error {
		result := r.assetUsecase.DeleteAllAssetsInGroup(ctx, groupID)
		if !result.IsSuccess() {
-			return nil, result
+			return result
		}
+		return nil
+	})
+	if err != nil {
+		return nil, err
+	}
	return &DeleteAssetsInGroupPayload{
		GroupID: input.GroupID,
	}, nil
}
Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 0
♻️ Duplicate comments (2)
asset/infrastructure/decompress/zip.go (2)
29-58: ⚠️ Potential issue

Fix potential data race in concurrent file processing.

The current implementation may lead to data races when processing files concurrently, as the archive/zip package's Reader is not guaranteed to be thread-safe.

Consider serializing the file processing:

 func (d *ZipDecompressor) DecompressWithContent(ctx context.Context, content []byte) (<-chan repository.DecompressedFile, error) {
 	zipReader, err := zip.NewReader(bytes.NewReader(content), int64(len(content)))
 	if err != nil {
 		return nil, fmt.Errorf("failed to create zip reader: %w", err)
 	}

 	resultChan := make(chan repository.DecompressedFile, len(zipReader.File))
-	var wg sync.WaitGroup

 	go func() {
+		defer close(resultChan)
 		for _, f := range zipReader.File {
 			if f.FileInfo().IsDir() || isHiddenFile(f.Name) {
 				continue
 			}
-			wg.Add(1)
-			go d.processZipFile(ctx, f, resultChan, &wg)
+			select {
+			case <-ctx.Done():
+				resultChan <- repository.DecompressedFile{
+					Name:  f.Name,
+					Error: ctx.Err(),
+				}
+				return
+			default:
+				content, err := d.processFile(f)
+				if err != nil {
+					resultChan <- repository.DecompressedFile{
+						Name:  f.Name,
+						Error: err,
+					}
+					continue
+				}
+				resultChan <- repository.DecompressedFile{
+					Name:    f.Name,
+					Content: content,
+				}
+			}
 		}
-		wg.Wait()
-		close(resultChan)
 	}()

 	return resultChan, nil
 }
110-199: ⚠️ Potential issue

Improve synchronization in compression logic.

The current implementation has several synchronization issues:
- The deferred Close() should be protected by the mutex
- The error channel buffer size should match the number of files
- Potential deadlock if error channel is full

Apply these improvements:

 func (d *ZipDecompressor) CompressWithContent(ctx context.Context, files map[string]io.Reader) (<-chan repository.CompressResult, error) {
 	resultChan := make(chan repository.CompressResult, 1)
+	errChan := make(chan error, len(files)) // Buffer to match number of files

 	go func() {
 		defer close(resultChan)

 		buf := new(bytes.Buffer)
 		zipWriter := zip.NewWriter(buf)
-		defer func(zipWriter *zip.Writer) {
-			err := zipWriter.Close()
-			if err != nil {
-				log.Warn("failed to close zip writer", zap.Error(err))
-			}
-		}(zipWriter)

 		var mu sync.Mutex
 		var wg sync.WaitGroup
-		errChan := make(chan error, 1)

 		// ... rest of the code ...

 		select {
 		case <-ctx.Done():
 			resultChan <- repository.CompressResult{Error: ctx.Err()}
 		case err := <-errChan:
 			resultChan <- repository.CompressResult{Error: err}
 		case <-done:
+			mu.Lock()
 			if err := zipWriter.Close(); err != nil {
+				mu.Unlock()
 				resultChan <- repository.CompressResult{Error: fmt.Errorf("failed to close zip writer: %w", err)}
 				return
 			}
+			mu.Unlock()
 			resultChan <- repository.CompressResult{Content: buf.Bytes()}
 		}
 	}()

 	return resultChan, nil
 }
🧹 Nitpick comments (5)
asset/domain/entity/group.go (1)
31-50: Kudos on implementing a flexible validation approach

The Validate method centralizes your validation logic nicely. While not mandatory, you may consider automatically calling this method (or equivalent checks) upon group creation to ensure immediate data integrity rather than deferring it to later invocations.

asset/domain/id/id.go (1)
1-42: Add documentation comments for clarity

Consider adding doc comments for each type and its corresponding variables to help your team quickly understand their purpose and intended usage.

Below is an example:

 package id

 import "github.com/reearth/reearthx/idx"

+// idAsset is the type marker indicating the asset entity for ID usage.
 type idAsset struct{}

+// idGroup is the type marker indicating the group entity for ID usage.
 type idGroup struct{}

+// idProject is the type marker indicating the project entity for ID usage.
 type idProject struct{}

+// idWorkspace is the type marker indicating the workspace entity for ID usage.
 type idWorkspace struct{}
88-108
: Consider streaming large files instead of loading them entirely into memory.The current implementation reads the entire file content into memory, which could be problematic for large files. Consider implementing a streaming approach for better memory efficiency.
func (d *ZipDecompressor) processFile(f *zip.File) (io.Reader, error) { rc, err := f.Open() if err != nil { return nil, fmt.Errorf("failed to open file in zip: %w", err) } - defer func(rc io.ReadCloser) { - err := rc.Close() - if err != nil { - log.Warn("failed to close file in zip", zap.Error(err)) - } - }(rc) - - // Read the entire file content into memory - content, err := io.ReadAll(rc) - if err != nil { - return nil, fmt.Errorf("failed to read file content: %w", err) - } - - return bytes.NewReader(content), nil + return io.NopCloser(rc), nil }
216-220: Consider cross-platform hidden file detection.

The current implementation only handles Unix-style hidden files (starting with a dot). Consider adding support for Windows hidden file attributes. Note that syscall.GetFileAttributes and FILE_ATTRIBUTE_HIDDEN exist only in Windows builds of the syscall package, so in practice this would need a //go:build windows file split rather than a runtime.GOOS check alone.

+import "runtime"
+import "syscall"
+import "path/filepath"
+
 func isHiddenFile(name string) bool {
+	if runtime.GOOS == "windows" {
+		pointer, err := syscall.UTF16PtrFromString(filepath.Clean(name))
+		if err != nil {
+			return false
+		}
+		attributes, err := syscall.GetFileAttributes(pointer)
+		if err != nil {
+			return false
+		}
+		return attributes&syscall.FILE_ATTRIBUTE_HIDDEN != 0
+	}
 	base := filepath.Base(name)
 	return len(base) > 0 && base[0] == '.'
 }
13-14
: Consider adding debug logs for troubleshooting.

While the current error logging is good, adding debug logs would help with troubleshooting:
- File processing start/end
- Compression progress
- Size metrics
Example addition:
 func (d *ZipDecompressor) processFile(f *zip.File) (io.Reader, error) {
+	log.Debug("processing zip file",
+		zap.String("filename", f.Name),
+		zap.Int64("size", f.FileInfo().Size()))
 	rc, err := f.Open()
 	// ... rest of the code ...
 }

Also applies to: 96-99, 130-133
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- asset/domain/entity/group.go (1 hunks)
- asset/domain/errors.go (1 hunks)
- asset/domain/id/id.go (1 hunks)
- asset/graphql/schema.resolvers.go (1 hunks)
- asset/infrastructure/decompress/zip.go (1 hunks)
- asset/infrastructure/gcs/client.go (1 hunks)
- asset/infrastructure/gcs/client_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- asset/domain/errors.go
- asset/infrastructure/gcs/client.go
- asset/graphql/schema.resolvers.go
- asset/infrastructure/gcs/client_test.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: CI
🔇 Additional comments (8)
asset/domain/entity/group.go (5)
21-29: Validate the name field in the constructor

Since UpdateName rejects an empty name, it's consistent to enforce the same constraint when creating a new Group to maintain data integrity.

Below is a reference snippet from previous reviews, returning an error if the name is empty:

-func NewGroup(id id.GroupID, name string) *Group {
+func NewGroup(id id.GroupID, name string) (*Group, error) {
+	if name == "" {
+		return nil, domain.ErrEmptyGroupName
+	}
 	now := time.Now()
 	return &Group{
 		id:        id,
 		name:      name,
 		createdAt: now,
 		updatedAt: now,
-	}
+	}, nil
 }
61-68: Add length validation in UpdateName

The constructor and Validate method enforce a maximum of 100 characters for name, but the UpdateName method only checks if the name is empty.

Below is a reference snippet from previous reviews, incorporating length validation:

 func (g *Group) UpdateName(name string) error {
 	if name == "" {
 		return domain.ErrEmptyGroupName
 	}
+	if len(name) > 100 {
+		return domain.ErrGroupNameTooLong
+	}
 	g.name = name
 	g.updatedAt = time.Now()
 	return nil
 }
70-77: Enforce length validation in UpdatePolicy

Currently, the UpdatePolicy method verifies that policy isn't empty, but it does not impose a maximum length.

Below is a reference snippet from previous reviews, adding length validation:

 func (g *Group) UpdatePolicy(policy string) error {
 	if policy == "" {
 		return domain.ErrEmptyPolicy
 	}
+	if len(policy) > 1000 {
+		return domain.ErrPolicyTooLong
+	}
 	g.policy = policy
 	g.updatedAt = time.Now()
 	return nil
 }
79-83: Add length validation to UpdateDescription

The Validate method enforces a maximum of 500 characters for description, but UpdateDescription does not perform any length checks.

Below is a reference snippet from previous reviews:

 func (g *Group) UpdateDescription(description string) error {
+	if len(description) > 500 {
+		return domain.ErrDescriptionTooLong
+	}
 	g.description = description
 	g.updatedAt = time.Now()
 	return nil
 }
85-88: No issues with the SetCreatedAt method

This method is concise and fulfills its purpose. Keep up the clarity here.

asset/infrastructure/decompress/zip.go (3)
1-23: LGTM! Well-structured package setup.

Good practices observed:
- Clear package documentation
- Interface compliance check using var _ repository.Decompressor = (*ZipDecompressor)(nil)
201-214: LGTM! Well-documented and properly implemented.

Good practices observed:
- Clear documentation about thread safety requirements
- Proper error handling with descriptive messages
33-35: LGTM! Excellent error handling practices.

Good practices observed:
- Errors are properly wrapped with descriptive messages using fmt.Errorf and %w
- Consistent error handling patterns throughout the code
- Proper error propagation to calling code
Also applies to: 91-93, 155-161, 205-207, 209-211
- Deleted the NewValidationResult function from the validator.go file
- Simplified the validation package by removing an unnecessary constructor method
- Maintains the existing Valid() method for creating valid validation results
Summary by CodeRabbit
Here are the updated release notes for the pull request:
Infrastructure
Domain
Repository
Usecase
Features
Testing