Client-Side File Upload to Amazon S3 with NodeJS backend

This article describes uploading a file to Amazon S3 entirely from the client. The server (Node.js in this case) never sees or handles the actual file the user is uploading.

For uploading files from within Node.js itself, the npm module Knox can be used. In this article, however, the upload will bypass our server entirely.

Node.js 0.4.9 or higher is required as a runtime environment for our example.

The application design follows this principle: a client that wants to upload files posts them directly to the Amazon servers, so the files never reach our application server and are not processed by it. The load and traffic on our own server stay moderate while the upload runs at the best possible speed.

In order to upload data to S3, a client must have a valid policy which allows it to post a file there. A policy is used for a single request and is not permanently valid. To create a valid policy you need to know the individual access keys of the bucket which will be used for uploading the data. Because of this, client-side generation of the policy is not an option, as the AWS keys would be exposed.

The structure of such a policy as well as the HTML form for an S3 POST request (for file upload) is documented here and here. First of all, you will need a file upload form containing the input field for the file, as well as all the other fields described in the documentation.

<form action="http://somebucket.s3.amazonaws.com/" method="post" enctype="multipart/form-data" id="myform">
	<input type="file" name="file" id="file">
	<input type="hidden" name="policy">
	[...]
	<input type="submit" name="submit" id="btn_submit">
</form>

When the user chooses a file to upload and submits the form, we intercept the click on the submit button in order to request a valid policy from our server (the code example requires jQuery) before posting the file to S3:

var requestCredentials = function( event ) {
  event.preventDefault();

  // strip the (fake) path the browser prepends and keep only the file name
  var _file = $( "#file" ).val().replace( /.+[\\\/]/, "" );

  $.ajax({
    url: "/gets3credentials/" + encodeURIComponent( _file ),
    dataType: "jsonp",
    success: processResponse,
    error: function( res, status, error ) {
      // do some error handling here
    }
  });
};

$( "#btn_submit" ).bind( "click", requestCredentials );

The form submission is intercepted via event.preventDefault(), which prevents the form from being submitted at this point. We then extract the file name: _file = $("#file").val().replace(/.+[\\\/]/, "");. Next we place an Ajax call requesting the policy and signature for our file against a custom endpoint called /gets3credentials/[file] on our Node.js server, which can be defined using the Connect or Express framework, for example.

This method has to create a valid policy and signature and return these to us so we can set the appropriate data in our form before submitting it to S3.

var crypto = require( "crypto" );
var mime = require( "mime-magic" ); // only usable once a file is on disk; see the Content-Type note below

var createS3Policy;

createS3Policy = function( mimetype, callback ) {
  // ISO 8601 timestamp (UTC) one hour in the future
  var _expiration = new Date( Date.now() + 60 * 60 * 1000 ).toISOString();

  var s3Policy = {
    "expiration": _expiration,
    "conditions": [
      { "bucket": "bucketName" },
      ["starts-with", "$Content-Disposition", ""],
      ["starts-with", "$key", "someFilePrefix_"],
      { "acl": "public-read" },
      { "success_action_redirect": "http://example.com/uploadsuccess" },
      ["content-length-range", 0, 2147483648],
      ["eq", "$Content-Type", mimetype]
    ]
  };

  // the policy is Base64 encoded, and the signature is an HMAC-SHA1
  // of that encoded string, keyed with the AWS secret key
  var s3PolicyBase64 = new Buffer( JSON.stringify( s3Policy ) ).toString( 'base64' );

  var s3Credentials = {
    s3PolicyBase64: s3PolicyBase64,
    s3Signature: crypto.createHmac( "sha1", "yourAWSsecretkey" ).update( s3PolicyBase64 ).digest( "base64" ),
    s3Key: "Your AWS Key",
    s3Redirect: "http://example.com/uploadsuccess",
    s3Policy: s3Policy
  };

  callback( s3Credentials );
};

This function creates an object called s3Policy which contains the following keys:

  • expiration: Determines how long the policy remains valid, i.e. for how long it can be used for an upload to S3. The timestamp must be in ISO 8601 format (yyyy-MM-ddThh:mm:ssZ), for example 2012-01-31T11:00:00Z. The period should be chosen generously enough that even an upload of large files can finish before it expires. In our example, the period is set to one hour.
  • conditions: An array of parameters for the upload to S3, which are listed separately below.
  • bucket: The unique name of the S3 bucket used for uploading.
  • ["starts-with", "$Content-Disposition", ""]: This key can be used to deliver the file as an attachment, forcing any client to show the 'save as' dialog when requesting the file, for example. This could be achieved using the value attachment.
  • ["starts-with", "$key", "someFilePrefix_"]: Ensures that the key begins with someFilePrefix_, enforcing a particular pattern for the file names.
  • { "acl": "public-read" }: Defines the file's access control. Common canned ACLs are public-read and authenticated-read.
  • { "success_action_redirect": "http://example.com/uploadsuccess" }: This is the URL the client is redirected to if the upload was successful. Among other things, this can be used to trigger processes that handle the uploaded file (image scaling, for example). The redirect happens client-side, forwarding the user's browser to this URL, so it can also be used to show a user-friendly message when the upload is complete.
  • ["content-length-range", 0, 2147483648]: Limits the size of the upload, in bytes.
  • ["eq", "$Content-Type", mimetype]: Specifies the content type that is stored on S3 for the file. The content type of a file can be determined in Node.js using mime-magic, for example. Since the file is never uploaded to our server, however, we either have to guess the MIME type from the file extension, or analyze the file after it has been uploaded to S3 by downloading it again and processing it on our server.
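Building such an expiration timestamp is a one-liner; the helper name below is a hypothetical illustration, and the one-hour validity is the value used in this article:

```javascript
// Sketch: produce an ISO 8601 / UTC expiration timestamp for the policy
function policyExpiration( hoursValid ) {
  return new Date( Date.now() + hoursValid * 60 * 60 * 1000 ).toISOString();
}

console.log( policyExpiration( 1 ) );
```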

After collecting the appropriate data, the policy is stored in a Base64 encoded variable. This is done by calling

s3PolicyBase64 = new Buffer( JSON.stringify( s3Policy ) ).toString( 'base64' );

We then generate the signature for our POST request. Note that the HMAC is computed over the Base64 encoded policy string, not over the raw policy object:

s3Signature = crypto.createHmac( "sha1", "yourAWSsecretkey" ).update( s3PolicyBase64 ).digest( "base64" );

Our s3Credentials object also contains another key called s3Key. This holds the applicable AWS Access Key (not the Secret Key!), which the form needs for the POST request. In addition, we provide the redirect URL again as its own key in the object, purely as a convenience for setting the value in the corresponding form field on the client. (Otherwise the value would only be available inside the Base64 encoded policy.)

The entire s3Credentials object is passed to the Ajax callback function on the client, which then puts the appropriate values into the corresponding fields of the form and submits the form.

function processResponse( res ) {
  [...]
  $( "#fld_redirect" ).val( res.s3Redirect );
  $( "#fld_AWSAccessKeyId" ).val( res.s3Key );
  $( "#fld_Policy" ).val( res.s3PolicyBase64 );
  $( "#fld_Signature" ).val( res.s3Signature );
  [...]
  $( "#myform" ).submit();
}

That's about it; hopefully this works for you. This is of course only a very basic example, but it should do for starters.
