Upload a file to S3 from Node.js

If you know AWS S3, you probably think of it as a place where you can store all your files; that is what most of us would say about it. S3 is one of AWS's older services: Amazon launched it long before the days of the revolutionary Lambda and the game-changing Alexa Skills. You can store any kind of file, from plain documents to PDFs, with object sizes ranging from 0 B to 5 TB. The official documentation describes AWS S3 as follows:

S3 offers comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements. It gives customers flexibility in how they manage data, along with cost optimization, access control and compliance features. Put simply, AWS S3 is an object-based storage system: everything in it is stored as an object, not as a file. Many large tech companies rely on S3; Dropbox, for example, stored a massive amount of its users' file data on S3 for years. It is not an expensive option either, being designed for 99.99 per cent availability, and cheaper storage classes such as Glacier can archive data for around $0.01 per GB.

If this has caught your attention, you are probably wondering how to use S3 from your Node.js applications. Well, you need not worry. AWS publishes an official package that exposes the S3 APIs to Node.js apps, making it simple for developers to access S3 from their applications. Let us walk through the steps involved in building a Node.js application that writes a file to AWS S3.

The following are the steps involved for the same:

Set up the node app – Being a basic node application, it only needs two files: package.json and a starter file named like any app's entry point – app.js, index.js or server.js.

You can use your OS's file manager or your favourite IDE to create the project, but the CLI is recommended. So open a shell and run these commands:

mkdir s3-contacts-upload-demo

cd s3-contacts-upload-demo

touch index.js .gitignore

npm init

If you did not get any errors from the commands above, you now have a folder named s3-contacts-upload-demo containing three files: package.json, .gitignore and index.js. You can list files in the .gitignore file so that they are not committed to GitHub or any other version control system.

npm init creates the package.json, which carries the details of the project; you can simply press Enter at each prompt in your shell if you intend to go with the default values.
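A generated package.json looks roughly like the fragment below; the name and version come from your answers to the npm init prompts, and the values shown here are only illustrative.

```json
{
  "name": "s3-contacts-upload-demo",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "license": "ISC"
}
```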

Install dependencies – In this step, you install the required npm package:

npm install --save aws-sdk

Once the package installs successfully, check your package.json file: you should see aws-sdk listed under the "dependencies" field.

This npm package can be used to access any AWS service from a Node.js app; here we use it with S3.

Import packages – Once installed, import the package in your code:

const fs = require('fs');
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});


As you can see, the fs package is imported; it is used later to read and write file data in this application. We use environment variables to supply the AWS access key and the secret access key, because it is bad practice to commit them to version control such as GitHub or SVN. We now have an S3 instance that can access all the buckets in the AWS account.
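Since the credentials come from the environment, it helps to fail fast with a clear message when they are missing. The helper below is a hypothetical addition, not part of the AWS SDK:

```javascript
// Read a required environment variable, throwing a descriptive error
// if it is not set, so the app never silently runs without credentials.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

You could then call `requireEnv('AWS_ACCESS_KEY')` and `requireEnv('AWS_SECRET_ACCESS_KEY')` before constructing the S3 instance.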

Pass the bucket information and write the business logic

Here is a simple prototype of how you upload a file to S3. Bucket is the name of your bucket, and Key is the path of the object inside it, including any subfolders. So if your bucket is named "test-bucket" and you want to save the file at "test-bucket/folder/subfolder/file.csv", then the value of Key should be "folder/subfolder/file.csv".
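To make the prefix-plus-filename rule concrete, here is a small hypothetical helper (not part of the AWS SDK) that builds such a Key:

```javascript
// Join a "folder" prefix and a file name into the Key string
// expected by s3.upload. Stray slashes are trimmed so we never
// produce keys like "folder//file.csv".
function buildKey(prefix, fileName) {
  const cleanPrefix = prefix.replace(/^\/+|\/+$/g, '');
  return cleanPrefix ? `${cleanPrefix}/${fileName}` : fileName;
}

console.log(buildKey('folder/subfolder', 'file.csv')); // folder/subfolder/file.csv
```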

Note: S3 is object-based storage, not file-based. So even though the AWS console shows nested folders, behind the scenes they are not saved that way. Every object has two fields – Key and Value. The Key is the name of the file and the Value is the data being stored. So if a bucket "bucket1" has the key "key1/key2/file.mp3", you can visualize it like this:


{
  "bucket1": {
      "key1/key2/file.mp3": "<mp3-data>"
  }
}

Below is a simple snippet to upload a file, using Key and Bucket name:

const params = {
  Bucket: 'bucket',
  Key: 'key',
  Body: stream
};

s3.upload(params, function(err, data) {
  console.log(err, data);
});

File to upload to S3 – First, create a file named contacts.csv and write some data into it.

const fs = require('fs');
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
    accessKeyId: process.env.AWS_ACCESS_KEY,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});

const fileName = 'contacts.csv';

const uploadFile = () => {
  fs.readFile(fileName, (err, data) => {
     if (err) throw err;
     const params = {
         Bucket: 'testBucket', // pass your bucket name
         Key: 'contacts.csv', // file will be saved as testBucket/contacts.csv
         Body: data // the raw file contents read above
     };
     s3.upload(params, function(s3Err, data) {
         if (s3Err) throw s3Err;
         console.log(`File uploaded successfully at ${data.Location}`);
     });
  });
};

uploadFile();


The S3 upload method returns an error and a data object in its callback, where the data object contains the Location, Bucket and Key of the uploaded file. For the complete API reference, refer to the official docs.
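The shape of that data object can be sketched as follows; the field values here are illustrative, not the output of a real upload:

```javascript
// s3.upload's callback receives a data object shaped roughly like this.
const data = {
  Location: 'https://testBucket.s3.amazonaws.com/contacts.csv',
  Bucket: 'testBucket',
  Key: 'contacts.csv',
};

// Destructure the fields you need from the callback result.
const { Location, Bucket, Key } = data;
console.log(`File uploaded successfully at ${Location}`);
```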

Now run the app with the following command:

AWS_ACCESS_KEY=<your_access_key> AWS_SECRET_ACCESS_KEY=<your_secret_key> node index.js
