feat: allow for server uploads #993
Conversation
describe("assets upload", () => { | ||
it("should be able to upload a file to the minio bucket from the server", async () => { | ||
const { uploadFileToS3 } = await import("./assets"); | ||
|
||
const file = await fs.readFile(new URL("./assets.ts", import.meta.url)); | ||
|
||
const url = await uploadFileToS3("test", "test.ts", file, { | ||
contentType: "text/plain", | ||
}); | ||
|
||
expect(url).toBeDefined(); | ||
|
||
const text = await fetch(url).then((res) => res.text()); | ||
|
||
expect(text).toEqual(file.toString("utf8")); | ||
}); | ||
}); |
do we need more tests of eg larger files?
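If so, a sketch of such a test could look roughly like this (assuming the same uploadFileToS3 helper and local minio bucket as above; the buffer is made larger than the default 5 MB part size so the multipart path actually gets exercised, and big-test.bin is a made-up key):

it("should be able to upload a file larger than one part", async () => {
  const { uploadFileToS3 } = await import("./assets");

  // ~12 MB, i.e. more than two of the default 5 MB parts
  const bigFile = Buffer.alloc(12 * 1024 * 1024, "a");

  const url = await uploadFileToS3("test", "big-test.bin", bigFile, {
    contentType: "application/octet-stream",
  });

  const res = await fetch(url);
  const body = Buffer.from(await res.arrayBuffer());

  expect(body.length).toEqual(bigFile.length);
  expect(body.equals(bigFile)).toBe(true);
});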
const parallelUploads3 = new Upload({
  client,
  params: {
    Bucket: bucket,
    Key: key,
    Body: fileData,
    ContentType: contentType,
  },
  queueSize: queueSize ?? 3, // optional concurrency configuration
  partSize: partSize ?? 1024 * 1024 * 5, // optional size of each part, in bytes, at least 5MB
  leavePartsOnError: false, // optional manually handle dropped parts
});

parallelUploads3.on(
  "httpUploadProgress",
  progressCallback ??
    ((progress) => {
      logger.debug(progress);
    })
);

const result = await parallelUploads3.done();

return result.Location!;
};
Basically the same as our legacy implementation, minus the ACL. I'm not sure what it is or why it would be necessary. We always seem to set it to public-read in legacy when we make uploads, e.g. here:
https://github.com/pubpub/pubpub/blob/2dfe92a70d647278e512a91dfb060e4480739df2/server/utils/s3.ts#L93-L121
but the upload seemed to work fine without it.
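If it ever does turn out to matter, my understanding is that the ACL is just one more field on the params object (a PutObjectCommandInput), so it could be threaded through along these lines; this is an untested sketch, and the bucket, key, and body are placeholders:

import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";

const upload = new Upload({
  client: new S3Client({}),
  params: {
    Bucket: "assets", // placeholder
    Key: "example.txt", // placeholder
    Body: Buffer.from("hello"),
    ContentType: "text/plain",
    ACL: "public-read", // the legacy flag in question; could become an optional arg of uploadFileToS3
  },
});

await upload.done();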
nvm, it's happening for me on main too... is it only me?
@allisonking it seems to work for me! do you have any overrides in
ok, don't hold this PR up for me then! I don't think I have any overrides in
Issue(s) Resolved
Not being able to upload files from the server
High-level Explanation of PR
While we have been able to upload files from the frontend for some time, uploading them from the server was not possible, because we were not exposing the S3 client we use for the frontend anywhere else.
This PR exposes that client and, more importantly, exposes a convenience function uploadFileToS3, which uses @aws-sdk/lib-storage for easy multipart file uploads, making file uploads much quicker. A rough usage sketch is included under Notes below.
Test Plan
Screenshots (if applicable)
Notes
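For illustration, a minimal sketch of calling the new helper from server code; the bucket name, key, and source file are placeholders, and the import path assumes the same assets module used in the tests above:

import { readFile } from "node:fs/promises";

import { uploadFileToS3 } from "./assets";

const pdf = await readFile("/tmp/report.pdf");

const url = await uploadFileToS3("my-bucket", "reports/report.pdf", pdf, {
  contentType: "application/pdf",
});

console.log(`uploaded to ${url}`);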