Zip files on S3 with AWS Lambda and Node
It's not an uncommon requirement to want to package files on S3 into a Zip file so that a user can download multiple files in a single package. Maybe it's common enough for AWS to offer this functionality themselves one day. Until then you can write a short script to do it.
If you want to provide this service in a serverless environment such as AWS Lambda you have two main constraints that define the approach you can take.
1 - /tmp is only 512MB. Your first idea might be to download the files from S3, zip them up, and upload the result. This will work fine until you fill up /tmp with the temporary files!
2 - Memory is constrained to 3GB. You could store the temporary files on the heap, but again you are constrained to 3GB. Even in a regular server environment you're not going to want a simple zip function to take 3GB of RAM!
So what can you do? The answer is to stream the data from S3, through an archiver, and back onto S3.
Fortunately this Stack Overflow post and its comments pointed the way, and this post is basically a rehash of it!
The code below is TypeScript, but the JavaScript is just the same with the types removed.
Start with the imports you need (the original snippet doesn't show the S3 client being created, so one is added here to make the code run as written):
```typescript
import * as Archiver from 'archiver';
import * as AWS from 'aws-sdk';
import { Readable, Stream } from 'stream';

// The S3 client used throughout the rest of the post.
const s3 = new AWS.S3();
```
Let's start by creating the streams to fetch the data from S3. Let's assume you have a list of keys in keys. For each key we need to create a ReadStream. To track the keys and streams, let's create a S3DownloadStreamDetails type. The filename will ultimately be the filename in the Zip, so you can do any transformation you need for that at this stage.
```typescript
type S3DownloadStreamDetails = { stream: Readable; filename: string };
```
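If you don't already have the list of keys to hand, you could build it by listing the bucket first. Here's a minimal sketch, assuming a placeholder bucket and prefix; note that listObjectsV2 returns at most 1000 keys per call, so it paginates with the continuation token:

```typescript
// Collect all object keys under a prefix, paginating as needed.
async function listKeys(bucket: string, prefix: string): Promise<string[]> {
  const keys: string[] = [];
  let continuationToken: string | undefined;
  do {
    const response = await s3
      .listObjectsV2({ Bucket: bucket, Prefix: prefix, ContinuationToken: continuationToken })
      .promise();
    (response.Contents || []).forEach((object) => {
      if (object.Key) {
        keys.push(object.Key);
      }
    });
    continuationToken = response.NextContinuationToken;
  } while (continuationToken);
  return keys;
}
```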
Now for our array of keys, we can iterate over it to create the S3DownloadStreamDetails objects:
```typescript
const s3DownloadStreams: S3DownloadStreamDetails[] = keys.map((key: string) => {
  return {
    stream: s3.getObject({ Bucket: 'Bucket Name', Key: key }).createReadStream(),
    filename: key,
  };
});
```
Now set up the upload side by creating a Stream.PassThrough object and assigning that as the Body of the params for a S3.PutObjectRequest.
```typescript
const streamPassThrough = new Stream.PassThrough();
const params: AWS.S3.PutObjectRequest = {
  ACL: 'private',
  Body: streamPassThrough,
  Bucket: 'Bucket Name',
  ContentType: 'application/zip',
  Key: 'The Key on S3',
  StorageClass: 'STANDARD_IA', // Or as appropriate
};
```
Now we can start the upload process.
```typescript
const s3Upload = s3.upload(params, (error: Error): void => {
  if (error) {
    console.error(`Got error creating stream to s3 ${error.name} ${error.message} ${error.stack}`);
    throw error;
  }
});
```
If you want to monitor the upload process, for example to give feedback to users, then you can attach a handler to httpUploadProgress like this.
```typescript
s3Upload.on('httpUploadProgress', (progress: { loaded: number; total: number; part: number; key: string }): void => {
  console.log(progress); // { loaded: 4915, total: 192915, part: 1, key: 'foo.jpg' }
});
```
Now create the archiver:
```typescript
const archive = Archiver('zip');
archive.on('error', (error: Archiver.ArchiverError) => {
  throw new Error(`${error.name} ${error.code} ${error.message} ${error.path} ${error.stack}`);
});
```
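As an aside, the Archiver constructor also accepts options, so you can trade CPU time for a smaller archive by setting the zlib compression level. A minimal alternative to the call above:

```typescript
// Optional: tune compression. Level 0 stores entries uncompressed,
// level 9 compresses hardest at the cost of CPU time.
const archive = Archiver('zip', { zlib: { level: 9 } });
```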
Now we can connect the archiver to pipe data to the upload stream and append all the download streams to it:
```typescript
await new Promise((resolve, reject) => {
  console.log('Starting upload');
  s3Upload.on('close', resolve);
  s3Upload.on('end', resolve);
  s3Upload.on('error', reject);
  archive.pipe(streamPassThrough);
  s3DownloadStreams.forEach((streamDetails: S3DownloadStreamDetails) =>
    archive.append(streamDetails.stream, { name: streamDetails.filename })
  );
  archive.finalize();
}).catch((error: { code: string; message: string; data: string }) => {
  throw new Error(`${error.code} ${error.message} ${error.data}`);
});
```
Finally, wait for the uploader to finish:
```typescript
await s3Upload.promise();
```
and yous're done.
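Since the point of the exercise is to let a user download the archive, you may also want to hand them a link to it. One minimal sketch is a presigned URL, assuming the same placeholder bucket and key as above:

```typescript
// Generate a time-limited download link for the finished archive.
const downloadUrl = s3.getSignedUrl('getObject', {
  Bucket: 'Bucket Name',
  Key: 'The Key on S3',
  Expires: 60 * 60, // seconds, so the link is valid for one hour
});
```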
I've tested this with 10GB+ archives and it works like a charm. I hope this has helped you out.
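For reference, here is one way the pieces could fit together as a Lambda handler. This is a sketch under assumptions not in the original post: the event supplies the source keys and the target key, the bucket name is a placeholder, and error handling is kept minimal.

```typescript
import * as Archiver from 'archiver';
import * as AWS from 'aws-sdk';
import { Readable, Stream } from 'stream';

const s3 = new AWS.S3();
const BUCKET = 'Bucket Name'; // placeholder

type S3DownloadStreamDetails = { stream: Readable; filename: string };

export async function handler(event: { keys: string[]; zipKey: string }): Promise<string> {
  // One read stream per source object; the key doubles as the filename in the zip.
  const downloadStreams: S3DownloadStreamDetails[] = event.keys.map((key: string) => ({
    stream: s3.getObject({ Bucket: BUCKET, Key: key }).createReadStream(),
    filename: key,
  }));

  // The PassThrough stream connects the archiver's output to the S3 upload.
  const streamPassThrough = new Stream.PassThrough();
  const upload = s3.upload({
    Bucket: BUCKET,
    Key: event.zipKey,
    Body: streamPassThrough,
    ContentType: 'application/zip',
  });

  const archive = Archiver('zip');
  archive.on('error', (error: Archiver.ArchiverError) => {
    throw new Error(`${error.name} ${error.code} ${error.message}`);
  });

  // Pipe the archive into the upload and append every download stream.
  archive.pipe(streamPassThrough);
  downloadStreams.forEach(({ stream, filename }) => archive.append(stream, { name: filename }));
  archive.finalize();

  // Resolves once the multipart upload has completed.
  await upload.promise();
  return event.zipKey;
}
```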
Source: https://dev.to/lineup-ninja/zip-files-on-s3-with-aws-lambda-and-node-1nm1