File uploads have long been an integral part of web applications. Files of various types and sizes are uploaded to a server, and large files can take substantial time to transfer. A certain icing on the cake would be to offer users resumable uploads: a user who expects a long transfer can pause the upload and resume it later, operating at their own discretion rather than being limited by the application's inflexibility.


The HTML5 File API has brought about considerable changes to the way files are processed and sent to the server. It gives the client a lot of power to process selected files and the leeway to decide which format it needs. The API will not solve our problem straight away, but it gives us the necessary tools to do so. Before we delve into the problem at hand, a brief introduction to the API is in order.

The API provides a couple of interfaces to access files from the local file-system:

  1. File – a single file on the file-system. This object allows access to a set of read-only attributes like name, size and MIME type.
  2. Blob – short for Binary Large Object, a file-like object of immutable raw data. Slicing a file into chunks produces Blob objects.
  3. FileReader – the interface that reads a file (or Blob) asynchronously from the file-system.
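To make the Blob interface concrete, here is a minimal sketch of slicing a blob into fixed-size chunks. The chunk size is arbitrary, and the `Blob` constructor is assumed to be available globally (it is in browsers, and in Node 18+):

```javascript
// slice a 10-byte blob into 4-byte chunks using Blob.slice(start, end)
var blob = new Blob(["abcdefghij"]); // 10 bytes of data
var chunkSize = 4;
var chunks = [];
for(var offset = 0; offset < blob.size; offset += chunkSize){
    // slice() never reads past the end, but clamping keeps the intent explicit
    chunks.push(blob.slice(offset, Math.min(offset + chunkSize, blob.size)));
}
// 3 chunks: sizes 4, 4 and 2
```

Note that slice() does not copy the underlying data eagerly; each chunk is a lightweight view that is only read when handed to a FileReader or a request body.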


For our sample problem of uploading a single file and making the upload resumable, the above-mentioned interfaces are sufficient.

File API support in the browser

We can check for support as follows:

if(window.File && window.Blob && window.FileReader){
    // good to go. File API is supported
}

Now we need to select a file, read it asynchronously and then try uploading it to a server. The file object we get using the File interface is a reference to the actual file on the filesystem. The FileReader object reads the contents of the file object and once it’s done reading, the onload event of the FileReader object is triggered. Now, we have the contents and should proceed with the upload. The FileReader has several ways of reading a file asynchronously:

  • readAsText(blob, encoding) – used to read text files.
  • readAsDataURL(blob) – used to read a blob and create a data URL out of it. What is a data URL? It's a URL composed of a base64-encoded string of the original data. For example, let's say we want to include an external image as a background:



background: url(url_to_external_image);


This could be done (in a better/smarter way) using data URLs.


background: url(data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP...);


The value provided in the url attribute is the data URL. This saves a considerable number of HTTP requests for fetching external resources and hence improves performance. Once the read operation is complete, the onload event on the reader object is triggered, giving access to the data read.

  • readAsBinaryString(file) – returns the raw binary contents of the file and can be used to read any type of file.
  • readAsArrayBuffer(blob|file) – reads a blob or file and produces an ArrayBuffer, a fixed-length binary data buffer.
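Coming back to data URLs for a moment, the encoding itself can be sketched in a couple of lines: a data URL is just the MIME type plus the base64 encoding of the raw bytes. The payload here is purely illustrative; btoa() does the base64 encoding in browsers (and in Node 16+):

```javascript
// build a data URL by hand from a tiny text payload (illustrative only)
var payload = "hello";
var dataUrl = "data:text/plain;base64," + btoa(payload);
// dataUrl is "data:text/plain;base64,aGVsbG8="
```

readAsDataURL() produces exactly this shape for whatever blob it is given, with the blob's own MIME type in place of text/plain.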

The reader can be aborted at any moment using its abort() method, which stops the ongoing read operation. We could use either readAsDataURL() or readAsBinaryString() for this article. However, readAsDataURL() has the advantage that the resulting data URL can be used directly as a src attribute, e.g. for setting up thumbnails of uploaded files (if they are images) or for direct downloads. So we will stick with readAsDataURL(). Further, we will use the localStorage object to store and remember which file is currently being uploaded. This serves as a context to be used during the resume state (as will become evident as we move along). We will also use the HTML5 FormData API to send the data to the server.
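As a sketch of that context-keeping, the upload metadata can be stored and restored as JSON. The key and field names below are just illustrative, and a plain in-memory object stands in for localStorage where it is unavailable (e.g. outside a browser):

```javascript
// pick localStorage when present, otherwise a minimal in-memory stand-in
var store = (typeof localStorage !== "undefined") ? localStorage : {
    _data: {},
    setItem: function(key, value){ this._data[key] = String(value); },
    getItem: function(key){ return (key in this._data) ? this._data[key] : null; }
};

// remember which file is in flight (localStorage only holds strings, hence JSON.stringify)
store.setItem("currentFile", JSON.stringify({ fileName: "report.pdf", fileSize: 1048576 }));

// later, on resume, restore the context
var ctx = JSON.parse(store.getItem("currentFile"));
// ctx.fileName is "report.pdf" and ctx.fileSize is 1048576
```

One caveat worth remembering: a File object itself does not survive JSON.stringify, so only plain metadata like the name and size belongs in this context.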

Our HTML and corresponding JavaScript could be set up as follows:


<input type='file' name='myFile' id='uploadFile' />

<td><button id='pause'>pause</button></td>

<td><button id='resume'>resume</button></td>




/** helper function to create an Ajax request */
function initiateXHR(object, method, url, mode, fileDataURL, fileName, headersObject){
    // create the object
    object = new XMLHttpRequest();
    object.open(method, url, true);

    // add event listeners for the xhr object
    addListener(object);

    // add request headers if any
    for(var header in headersObject){
        object.setRequestHeader(header, headersObject[header]);
    }

    // append the data
    var formData = new FormData();
    formData.append('file', fileDataURL);
    formData.append('mode', mode);

    object.send(formData);

    // return the request so the caller keeps a live reference (needed to abort on pause)
    return object;
}

/** helper function to add listeners to the Ajax request */
function addListener(object){
    object.onload = function(){
        if(object.readyState == 4){
            if(object.status == 200){
                if(object.response === "FILE_UPLOAD_SUCCESSFUL" || object.response === "FILE_APPEND_SUCCESSFUL"){
                    /** note that once the upload completes, make sure to remove
                     *  the localStorage entry.
                     */
                    localStorage.removeItem('currentFile');
                }
            }
        }
    };

    object.onerror = function(){
        // xhr error handling goes here
    };
}

var xhr = null, _xhr2 = null, _reader = null;

/** file input change handler */

document.getElementById('uploadFile').addEventListener('change', function(e){
    // some file was selected. Let's get the file and read it.
    var file = this.files[0]; // reference to the actual file on the filesystem
    var fileSize = file.size;
    var fileName = file.name;
    _reader = new FileReader();

    _reader.onload = function(event){
        var data = event.target.result; // this contains the read content
        // store the file's metadata in localStorage for future use.
        // note: the File object itself cannot be serialised with JSON.stringify,
        // so only its name and size are stored here.
        localStorage.setItem('currentFile', JSON.stringify({
            'fileSize' : fileSize,
            'fileName' : fileName
        }));

        /** initiate an XHR request to upload to the server */
        xhr = initiateXHR(xhr, "POST", "uploadFileToServer.php", "upload", data, fileName, { 'X-FileName' : fileName });
    };

    _reader.readAsDataURL(file);
}, false);

The basic setup above will read the selected file and upload to the server.

Note that this might not seem the ideal way to upload files: direct file uploads are usually done via a form submit using the multipart/form-data encoding. We are, however, uploading the file via AJAX.

Now we need to add the pause/resume functionality. The process flow could be something like:

  1. Uninterrupted file upload in progress. A note of the file being uploaded is made.
  2. The user presses the pause button.
  3. The ongoing xhr operation is aborted.
  4. The user presses the resume button.
  5. Get the stored reference to the file to resume uploading.
  6. Find out how many bytes have already reached the server, slice off that many bytes from the beginning of the file, take the remainder and start uploading it.

Let's begin with the pause part. This part is easy.

/** pause button click event handler */
document.getElementById('pause').addEventListener('click', function(e){
    if(xhr){
        xhr.abort(); // abort the ongoing upload request
    }
}, false);

Now comes the fun part. Implementing resume is harder because we need to know how many bytes have already reached the server. There are several (incorrect) ways to find out. For example, we could attach progress event handlers to the xhr object to monitor the upload. However, that value can be wrong: some bytes may have been sent out but never reached the server before the abort kicked in. The safest way to learn the number of bytes transferred is to ask the server itself. So we issue another XHR request asking how many bytes of this particular file have been uploaded, then slice the file accordingly and continue uploading the remainder.
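The slicing arithmetic itself is simple. Here is a sketch with a stand-in blob of 10000 bytes, assuming the server has reported a hypothetical 2440 bytes already received:

```javascript
// stand-in for the selected File: a 10000-byte blob (Blob is global in browsers and Node 18+)
var file = new Blob([new Uint8Array(10000)]);
var bytesUploaded = 2440; // value the server would report for the partial upload

// slice off what the server already has; the rest is what remains to send
var remainingBlob = file.slice(bytesUploaded, file.size);
// remainingBlob.size is 7560
```

Because the server stores the decoded bytes and reports their count, slicing the original file by that same count lines the two halves up exactly.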

/** resume button click event handler */

document.getElementById('resume').addEventListener('click', function(e){
    // retrieve the entry for the stored upload context
    var myFileObject = JSON.parse(localStorage.getItem('currentFile'));
    var fileSize = myFileObject.fileSize;
    var fileName = myFileObject.fileName;
    // the File object itself cannot be stored in localStorage,
    // so get the reference back from the (still populated) file input
    var fileReference = document.getElementById('uploadFile').files[0];

    _xhr2 = new XMLHttpRequest();
    _xhr2.open("GET", "getBytesFromServer.php?fileName=" + encodeURIComponent(fileName), true);

    _xhr2.onload = function(){
        if(_xhr2.readyState == 4 && _xhr2.status == 200){
            var response = JSON.parse(_xhr2.response);
            /**
             * Let's assume that our response from the server is an object:
             *     {
             *         "fileName" : "myAwesomeFile",
             *         "bytesUploaded" : 2440
             *     }
             */
            var bytesSent = response.bytesUploaded;
            var fileBlob = fileReference.slice(bytesSent, fileSize); // crux of this article

            _reader = new FileReader();
            _reader.onload = function(){
                var remainingFile = _reader.result;
                /** upload the remaining part of the file to the server now */
                xhr = initiateXHR(xhr, "POST", "uploadFileToServer.php", "resume", remainingFile, fileName, { 'X-FileName' : fileName });
            };
            _reader.readAsDataURL(fileBlob);
        }
    };

    _xhr2.send();
}, false);

Server Side

Sample implementation for uploadFileToServer.php :


<?php
$response = "";

$allHeaders = apache_request_headers();
$fileName = $allHeaders['X-FileName'];
$fileContent = $_POST['file'];

// fileContent will contain the data URL, which holds data in the format:
// data:<MIMETYPE>;base64,<base64_encoded_file_content>
$out = explode("base64,", $fileContent);
$fileData = base64_decode($out[1], TRUE);

$mode = $_POST['mode'];
if($mode == "resume"){
    // this is the resume mode. Append to the file instead of overwriting it.
    if(file_exists($fileName)){
        file_put_contents($fileName, $fileData, FILE_APPEND);
        $response = "FILE_APPEND_SUCCESSFUL";
    }
    else{
        $response = "file not found !";
    }
}
else if($mode == "upload"){
    // this is the normal uninterrupted upload mode
    file_put_contents($fileName, $fileData);
    $response = "FILE_UPLOAD_SUCCESSFUL";
}

echo $response;
?>
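The decode step above can be mirrored in JavaScript, which is handy for sanity-checking the format on the client before sending. This sketch uses Node's Buffer for the base64 decode; in a browser atob() would play the same role:

```javascript
// split the data URL at "base64," and decode the remainder back into raw bytes,
// exactly as the server does before writing the file to disk
var dataUrl = "data:text/plain;base64,aGVsbG8=";
var base64Part = dataUrl.split("base64,")[1];
var decoded = Buffer.from(base64Part, "base64").toString("utf8");
// decoded is "hello"
```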


Sample implementation for getBytesFromServer.php :



<?php
$fileName = $_GET['fileName'];
$fileSize = 0;

$response = array(
    'fileName'      => $fileName,
    'bytesUploaded' => $fileSize
);

if($fileName != NULL){
    try{
        // filesize() is a built-in function which returns the size of the file in bytes
        $fileSize = file_exists($fileName) ? filesize($fileName) : 0;
    }
    catch(Exception $e){
        $fileSize = -1;
    }
}

$response['bytesUploaded'] = $fileSize;
echo json_encode($response);
?>


That's it. We're done. Note that the XHR request carrying the remaining part of the file appends its contents to the partially uploaded file on the server rather than creating a new one. Further, I've used the localStorage object of the WebStorage API to remember which file is being uploaded; this is fine as long as the stored data is small. For humongous payloads, it's better to switch to IndexedDB, which offers full-fledged database support for storing large data in the browser.
