
SAP Biddo was an amazing affair…

Introduction

I was involved in the development of an internal app, SAP BIDDO, an online bidding and shopping site. It’s a small-to-medium sized app targeted at around 20,000+ live users. We didn’t have much time to devote and the app needed to be up soon. Not many major architectural decisions could be imbued into the application since there were just two of us. But we still tried to make it as robust as possible. They say:

If you want to go fast, then go alone…

But if you want to go far, then go together…

We were definitely in the former camp, and in this blog I will share my development experience.

The Stack

Now, our app was going to be I/O heavy, i.e. reading about the shopping/bidding items and bidding for or buying them. So we decided to go with Node.js on the server. As we started to prepare our schema, we realized that we didn’t have many aggregations to perform on our data to get what was needed, so storing all relevant data in one assorted data record in MongoDB suited us perfectly. Redis was chosen as a persistent session store and for caching items.

I am well versed with React.js (a front-end JavaScript framework by Facebook) and so we glued React.js with Bootstrap for the front-end. So our tech stack looked like this:

[Image: our tech stack]

And this is what they were going to do:

[Image: what each part of the stack was going to do]

Development

1. Front-End

As mentioned, React is Facebook’s front-end JavaScript library. It’s not an MVC framework per se, but rather just the V (the View). It is a very useful tool for creating web components which can be re-used across an application. Creating components with React is very easy, assisted by React’s very comprehensible documentation. React is built upon the idea of forgoing many individual DOM mutations and instead determining the minimum set of changes needed to take the DOM from state A to state B. It thrives on the concept of a Virtual DOM; conceptually it is like running an edit-distance algorithm (à la Levenshtein) on a DOM tree to compute the minimum number of changes needed to transform the DOM state (but in a clever way).

As such, comparing two arbitrary trees A and B and computing the minimum changes required to convert tree A into tree B is of O(n³) order. However, React uses its diff algorithm and heuristics to compute an approximate difference in O(n) time. Check out this awesome article to know more about the algorithm.

React’s philosophy is one-way data binding: data flows from the parent component down to the child component. Thanks to its meticulous design, it also makes an effort to reduce XSS attacks on the client by replacing string-based inline content with objects. Not bad for a view library. We ended up with 15–20 detailed components for our app.
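To illustrate the one-way flow, here is a minimal sketch in the era-appropriate React.createClass style (the component and props are illustrative, not our actual code):

// the parent passes data down; the child just renders what it receives
var BidItem = React.createClass({
  render: function () {
    return (
      &lt;div className="panel panel-default"&gt;
        &lt;h4&gt;{this.props.name}&lt;/h4&gt;
        &lt;p&gt;Current minimum bid: {this.props.minBid}&lt;/p&gt;
      &lt;/div&gt;
    );
  }
});

React.render(&lt;BidItem name="Headphones" minBid={500} /&gt;, document.getElementById('app'));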

React provides an interface for writing re-usable code called Mixins. We could write a piece of code as a mixin and then use it across components, e.g. a mixin for raising AJAX requests.
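To give a flavor, here is a minimal sketch of such a mixin (the mixin, component and endpoint names are illustrative assumptions, not our actual code):

// reusable XHR helper shared across components
var AjaxMixin = {
  get: function (url, onSuccess) {
    var xhr = new XMLHttpRequest();
    xhr.onload = function () {
      onSuccess(JSON.parse(xhr.responseText));
    };
    xhr.open('GET', url);
    xhr.send();
  }
};

var ItemList = React.createClass({
  mixins: [AjaxMixin], // AjaxMixin's methods become available on `this`
  getInitialState: function () {
    return { items: [] };
  },
  componentDidMount: function () {
    this.get('/items', function (items) {
      this.setState({ items: items });
    }.bind(this));
  },
  render: function () {
    return &lt;ul&gt;{this.state.items.map(function (it) {
      return &lt;li key={it.id}&gt;{it.name}&lt;/li&gt;;
    })}&lt;/ul&gt;;
  }
});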

Keeping our data up-to-date

For our bidding section, since an item’s minimum bid value needed to be updated frequently based on others’ bids, we used Server-Sent Events (SSE) to keep the bid price updated across all clients. SSE provide a sublime and easy way to send data from a server to a client. The client does not poll for the latest data; rather, it registers itself with the server using the EventSource API. Once the connection is established, the server can send data to (all) registered clients. Please refer to this article for more information on SSE. This solution worked for us because, unlike with WebSockets, we didn’t require a full-duplex connection for our requirement.
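A minimal sketch of how this might look (the route path, the onNewBid hook and the UI helper are illustrative assumptions, not our actual code):

// server: an Express route that keeps the connection open and pushes bid updates
app.get('/bids/stream', function (req, res) {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream', // marks the response as an SSE stream
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive'
  });
  // hypothetical hook invoked whenever a new bid is accepted
  onNewBid(function (bid) {
    res.write('data: ' + JSON.stringify(bid) + '\n\n'); // one SSE frame per bid
  });
});

// client: register with the server and react to pushed updates
var source = new EventSource('/bids/stream');
source.onmessage = function (e) {
  var bid = JSON.parse(e.data);
  updateMinimumBid(bid.itemId, bid.amount); // hypothetical UI update
};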

Performance

We needed to assess the performance of the app with respect to resource load times, as our app server served a considerable number of images, CSS and JavaScript files. For this, we resorted to the Resource Timing API.

Resource Timing API

This is a browser API that lets you gauge the load times of resources loaded onto a page. This API is accessible via the window.performance interface. Now, to gather info about resources on the page, we can use an important method:

var resources = window.performance.getEntriesByType('resource');

This method returns an array of PerformanceResourceTiming objects. Each PerformanceResourceTiming object contains information such as:


[Image: properties of a PerformanceResourceTiming object]

Note that all values represent milliseconds. These properties are best explained by the various phases of the request/response shown below:


[Image: the request/response timing phases]

image courtesy: perfplanet

So one could now access this information for each resource loaded onto a page, send it over to the server, and try to work out whether any of these resources is creating a performance bottleneck.
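A minimal sketch of that collection step (the reporting endpoint is a hypothetical one of ours, not part of the API):

// measure every resource fetched so far
var resources = window.performance.getEntriesByType('resource');
var report = resources.map(function (entry) {
  return {
    name: entry.name, // the resource URL
    duration: entry.responseEnd - entry.startTime // total load time in ms
  };
});

// ship the measurements to a hypothetical collection endpoint
var xhr = new XMLHttpRequest();
xhr.open('POST', '/perf/resources');
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify(report));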

The Resource Timing API is supported by Chrome and IE10+, and can be manually enabled in Firefox (via the dom.enable_resource_timing flag).



2. Server

With Node being our platform, we went with the regular paradigm and chose ExpressJS as our server-side framework. The major development went on in this layer.

1. Respawn – It is possible that an uncaught exception on the server could cause the process to exit. In order to make our Node server run continuously, i.e. restart automatically after an exception, we used respawn. Other tools like forever could also be used; choosing respawn in our case was just a matter of personal preference. respawn emits events which we can listen to and act on accordingly:

var respawn = require('respawn');

var proc = respawn(['node', 'myApp.js'], {
    env: { ENV_VAR: 'test' },
    cwd: '.',
    maxRestarts: 10,
    sleep: 1000
});

proc.on('spawn', function () {
  console.info('>> application monitor started…');
});

proc.on('exit', function (code, signal) {
  console.error('>> process exited, code: ' + code + ' signal: ' + signal);
});

proc.on('stdout', function (data) {
  console.log(data.toString());
});

proc.on('stderr', function (data) {
  console.error('>> process error ' + data.toString());
});

proc.start();

2. SSO – We were running an HTTPS server supporting SSO using client-side authentication. It’s quite easy to support client certificate authentication when the server runs on Node. For e.g., our sample HTTPS server looked like:

var https = require('https');
var fs = require('fs');

var SSLOpt = {
    key: fs.readFileSync('/certificates/privatekey.pem', 'ascii').toString(),
    cert: fs.readFileSync('/certificates/certificate.pem', 'ascii').toString(),
    ca: fs.readFileSync('/certificates/ca.crt', 'ascii').toString(),
    /* By requesting that the client provide a certificate, we are essentially authenticating the user */
    requestCert: true,
    /* If set to true, no unauthenticated traffic will make it to the routes */
    rejectUnauthorized: false
};

var server = https.createServer(SSLOpt, app);

server.listen(app.get('port'), function () {
  console.log('Express server listening on port ' + app.get('port'));
});

So we provide the server-side SSL certificate, the key and the signing CA certificate. The parameter requestCert, set to true, ensures that the client presents a certificate to authenticate herself. The parameter rejectUnauthorized is self-explanatory. Once the server is running and we access the app in the browser, we’re presented with a certificate selection popup. Upon selecting a certificate, it’s very easy to retrieve, on the server, the certificate that was selected:

// retrieve the certificate the client selected
var certificate = request.connection.getPeerCertificate();

The certificate variable contains all information about the client certificate as a JSON object. Now we can verify/retrieve data about the client, like user ID, user email, etc.:

return certificate.issuer.CN === 'MyIssuer';




3. Security – The following measures were taken to ensure the app provided a secure interface:

HTTP headers

1. X-XSS-Protection – Most modern browsers have built-in reflected Cross-Site Scripting (XSS) protection. It can be controlled by setting this header from the server. Values for the header include:

X-XSS-Protection: 0 | 1 | 1; mode=block | 1; report=SOME_URL

Here 0 disables the protection and 1 enables it. When set to 1; mode=block, the whole response is blocked if some unsanitized script is detected to have been injected. When set to 1; report=SOME_URL, all potential XSS attacks are reported to the URL provided (the data is POSTed to that URL).

2. X-Content-Type-Options – Browsers interpret different resources sent by the server differently; e.g., a text file is interpreted differently from an image. What a file should be interpreted as is determined by its Content-Type, which the server sends as a response HTTP header. However, servers sometimes send incorrect content types, so browsers were upgraded to determine (or sniff) the likely content type of a resource themselves. This process, called MIME sniffing, works by inspecting the first 256 bytes of the file. However, it opens the door to potential XSS attacks, so its use is discouraged. Setting the header as:

X-Content-Type-Options: nosniff

prevents the browser from second-guessing the MIME type and makes it act according to the Content-Type set by the server. Note that Firefox doesn’t support this header.


3. X-Powered-By – If we ever inspect a server’s response HTTP headers, we might notice the X-Powered-By header. For Express, the value of this response header is:

X-Powered-By: Express

This header lets the client know what server/technology the app is running on. Although it merely informs the client and poses no direct threat, it’s still quite imperative that we remove it from the response. Why? Because it provides legitimate information about the platform the app runs on (Express, ASP, PHP, etc.), which attackers could use to their advantage to concoct platform-specific exploits.
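A minimal sketch of how these headers might be wired up in Express (app.disable('x-powered-by') is Express’s built-in switch; the middleware is our own sketch):

// stop advertising Express in every response
app.disable('x-powered-by');

app.use(function (req, res, next) {
  // enable the browser's reflected-XSS filter and block the page on detection
  res.setHeader('X-XSS-Protection', '1; mode=block');
  // forbid MIME sniffing; the browser must honour the Content-Type we send
  res.setHeader('X-Content-Type-Options', 'nosniff');
  next();
});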

                        

CSRF mitigation

Cross-Site Request Forgery (CSRF), as per OWASP, is an attack which forces an end user to execute unwanted actions on a web application in which he/she is currently authenticated. It inherits the identity and privileges of the victim to perform an undesired function on the victim’s behalf, like changing the victim’s e-mail address, home address, or password, or purchasing something.

The common way to protect an app against such an attack is to issue a CSRF token with a form and then verify it later when the form is submitted. If the submitted token matches the reference token we hold, we allow the action to continue. So, the flow is like this:

  1. We generate a CSRF token and store it in our session.
  2. We issue this token to the client via cookies or hidden <input> field types. In our case, we sent via cookies.
  3. The client grabs this cookie and extracts the token.
4. Just before raising a POST request, the client appends this token in a custom header, X-CSRF-Token, which the server extracts.
  5. The server compares the header value against the token stored in the session to either allow or disallow the action.

ExpressJS provides middleware to generate such tokens and store them in the session.

app.use(express.csrf());

This middleware automatically generates the CSRF token and appends it to the session, where it can be accessed via req.session._csrf. It is also worth keeping in mind that CSRF tokens do not need to be verified for GET/HEAD/OPTIONS requests.
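On the client, step 4 of the flow might look like this minimal sketch (the cookie name and endpoint are our own choices, not a standard):

// read the CSRF token out of the cookie the server set
function getCookie(name) {
  var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

var xhr = new XMLHttpRequest();
xhr.open('POST', '/bid');
// echo the token back in the custom header the server verifies
xhr.setRequestHeader('X-CSRF-Token', getCookie('csrf-token'));
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify({ itemId: 42, amount: 100 }));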

              

XSS mitigation

Although the app didn’t have many hooks where user input was accepted, all inputs were still sanitized wherever possible. Sanitization was performed server-side as well, to ensure non-browser requests also passed through the purification process. Here’s another fun fact about ReactJS: React does allow inline styling, but not via simple strings; styles are passed as objects, which prevents a class of potential XSS attacks.

var myStyleObj = {
  color: '#f1f1f1',
  background: '#555555'
};

&lt;div style={myStyleObj}&gt;the contents go here&lt;/div&gt;

So this has helped mitigate quite a bit of XSS on the client. For further information on XSS mitigation, please go through this article.


HTTP Parameter Pollution (HPP)

This is one vulnerability which is (in my opinion) an implementation flaw and is quite often overlooked. Imagine we have an Express route that takes in a list of product IDs and is expected to return details about those products. So our route looks like this:

app.get('/getdetails', function (req, res) {
  var productIds = req.query.pid;
  // we've received the pids; now we query and return the details
});

So, for a request like localhost:3000/getdetails?pid=123&pid=124&pid=783

The productIds variable is populated as an array: [123, 124, 783]. This is all fine and satisfactory. But let’s say we’re implementing an autocomplete, so we want to search for a user by her name. Our route could look like:

app.get('/find', function (req, res) {
  var user = req.query.name;
  var type = req.query.type;
  if (type === 'username') {
    // proceed with the query…
  }
});

For a sample request

localhost:3000/find?name=abinash&type=username

We get the values for user as “abinash” and type as “username”. But what happens if we concoct a malicious request like:

localhost:3000/find?name=abinash&type=username&name=ravi

Then the user variable will be an array instead of a string, containing [“abinash”, “ravi”], and the subsequent query would obviously either fail or not return the expected results.
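One possible defense is to refuse polluted parameters outright; a minimal sketch (the policy of rejecting, rather than picking the first value, is our own choice):

app.get('/find', function (req, res) {
  var user = req.query.name;
  var type = req.query.type;
  // if a parameter was polluted into an array, reject the request
  if (Array.isArray(user) || Array.isArray(type)) {
    return res.send(400, 'duplicate query parameters are not allowed');
  }
  // proceed with the query…
});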


Clickjacking

This is a severe vulnerability wherein an attacker tricks you (a logged-in user) into performing an unintended action by obscuring the fact that the action is being performed. For e.g., say you land on a malicious site and find a button saying “click for free anti-virus scan”. The attacker has loaded a legitimate site into an invisible iframe on his page, positioned so that a form-submit button on the legitimate site aligns perfectly with the “free anti-virus scan” button. You click the latter, but you end up submitting the form, which might log you out or perform some action on a social networking site, etc.

This whole problem occurred because the site could be loaded inside an iframe. If we could prevent that, the problem would be mitigated. A very useful HTTP header, X-Frame-Options, comes into play here: it can prevent or allow loading a site in an iframe. Possible values for this header are:

X-Frame-Options: DENY | ALLOW-FROM uri | SAMEORIGIN

The server responds to the client with this header set, to declare whether the page may operate inside an iframe.
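In Express, a minimal sketch of setting it on every response (the blanket DENY policy is our own choice):

app.use(function (req, res, next) {
  res.setHeader('X-Frame-Options', 'DENY'); // refuse to be framed anywhere
  next();
});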


4. Data Compression

One very cardinal requirement is to reduce the overall footprint of the app over the wire. Although there are several aspects to turning this into fruition, we are going to discuss compressing data before sending it back to the client. Browsers send a request header stating which compression algorithms they support, so that they can decompress the compressed data when received. This information is found in the Accept-Encoding HTTP header. A sample value for this header could look like:

Accept-Encoding: gzip, deflate, sdch

Now that we have this information on the server, we can compress our data with an algorithm the user agent supports. For this, we could use Node’s zlib module, which provides gzip, deflate and deflateRaw support. We can pipe our response through a gzip/deflate stream and then send the compressed data to the user agent, as sketched below.
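A minimal sketch of that manual approach (a plain http server serving a single hypothetical file, closely following the zlib docs):

var http = require('http');
var fs = require('fs');
var zlib = require('zlib');

http.createServer(function (req, res) {
  var raw = fs.createReadStream('index.html'); // hypothetical file to serve
  var acceptEncoding = req.headers['accept-encoding'] || '';
  if (acceptEncoding.indexOf('gzip') !== -1) {
    res.writeHead(200, { 'Content-Encoding': 'gzip' });
    raw.pipe(zlib.createGzip()).pipe(res); // compress on the fly
  } else {
    res.writeHead(200, {});
    raw.pipe(res); // client can't decompress; send as-is
  }
}).listen(3000);

It also turns out that practically all modern browsers support gzip, and Express provides middleware which handles this data compression automatically for us: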

app.use(express.compress());

Yes, just including this middleware automatically compresses our data using gzip. This can be verified by observing the Content-Encoding HTTP response header. Note that for Express 4.x we have to manually import the compression module and then use:

var compress = require('compression');
app.use(compress());

5. Caching and Session Storage

Caching

Data caching is yet another aspect of a data-intensive application. We didn’t have humongous data to cache, but our considerably large items list definitely demanded to be cached. We had chosen Redis as our cache store. Although we may all be well versed with Redis, just for the information of all readers: Redis is a biggggggg data-structure server. It stores data structures in-memory. The most common usage of Redis is as a &lt;key, value&gt; store. However, because of the large number of items, we decided to leverage another important Redis data structure for the value: the linked list.

Redis provides two list data structures namely:

      • linked list – a classic doubly-linked list, storing &lt;prev, next, data&gt; per item. Note that for smaller lists or not-so-large data sets, the metadata can turn out to be much larger than the actual data, so the choice of a list should be made prudently.
      • ziplist – a specially encoded list representation suited to smaller data sets and very memory-efficient; it stores variable-length integers and strings in a single contiguous block.

For e.g., using the node_redis client, where data is an array of objects:

var clientMulti = redisClient.multi();
for (var i = 0; i &lt; data.length; i++) {
  clientMulti.rpush(key, JSON.stringify(data[i]));
}
clientMulti.exec(function (err, res) {
  callback(err, res);
});
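Reading the list back is symmetric; a minimal sketch (key and callback as above):

// fetch the whole list and parse each serialized entry back into an object
redisClient.lrange(key, 0, -1, function (err, items) {
  if (err) return callback(err);
  callback(null, items.map(function (item) { return JSON.parse(item); }));
});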

  

If interested, watch this extremely interesting video on how Twitter uses Redis for caching.

Session Storage

Express provides two ways for implementing sessions:

  1. Cookie based – a cookie carrying the session state is sent from the server to the client and then sent back to the server to regenerate the session state. This works well for stateless applications with simple (or small) session data.
  2. Session store – this approach relies on storing the session data in a file system, database, etc., which allows substantially more session information to be stored than a cookie does. For a pair &lt;session_key, session_data&gt;, a cookie containing session_key is sent to the client (to identify the client) while the pair is stored on the server. Later this cookie is sent back to the server, which takes the session_key and retrieves the corresponding session_data. Some server-side implementations, such as PHP’s, use the file system to store session data, while others keep it in volatile memory, i.e. all sessions get flushed if the server is shut down or restarted. Express supports external session stores which give you persistent session data. We used RedisStore to keep our session data on a separate Redis server. This way our sessions live in-memory, which allows fast access.

var RedisStore = require('connect-redis')(express);

app.use(express.session({
   store: new RedisStore(),
   secret: 'somebigggggsecret'
}));

Further, this also allows sessions to be shared if we host our application across different servers, say one built on Node and another in PHP: one server can re-use the session already created by the other and stored in Redis.

3. Database

What better companion for MongoDB could there be than the Mongoose ODM? The biggest advantages of using Mongoose are:

  1. Designing schemas and their corresponding models
  2. Building indexes on top of schemas – we had almost 8-9 collections, some heavily populated, others potentially so. We built indexes on all collections. Mongoose allows you to create compound indexes very easily. Further, one important factor to address while using indexes is the final query that is fired at the database (see the note and sketch below).

Covered Queries

While indexes do allow you to prune down the search, the query must be in tandem with the index built. We have to ensure that the fields we reference, including any projections, fit into the indexes created; otherwise the query does not use the index at all and the engine performs a document scan. Queries wherein all the referenced fields are part of the index are called covered queries. For e.g., a small yet important fix we made was to drop the _id field from our projections (in our case we didn’t need it), as that field was not part of the indexes built.
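A minimal sketch of the idea (the schema and field names are illustrative, not our actual schema):

// compound index over exactly the fields the query touches
ItemSchema.index({ category: 1, price: 1 });

// covered query: filter and project only indexed fields, explicitly dropping _id
ItemModel.find({ category: 'electronics' })
  .select({ category: 1, price: 1, _id: 0 })
  .exec(function (err, items) {
    // MongoDB can answer this entirely from the index
  });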

  3. Validation on schemas
  4. Middleware – functions that hook into the document lifecycle, like init, validate, save and remove
  5. Population – when we need to reference other collections as an attribute within our current collection, we can use population. For e.g. (a usage sketch follows the schema):

var PersonSchema = {
  name: String,
  age: Number,
  address: [{ type: Schema.Types.ObjectId, ref: 'AddressModel' }]
};

So our PersonSchema contains generic information like name and age. However, a person could have multiple addresses, which should therefore be stored as an array. We can create an AddressSchema (and thereby an AddressModel) and reference it in our PersonSchema (a very common feature in ORMs, if we recall).
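Querying then pulls the referenced documents in via populate(); a minimal sketch (model names as above, the query itself illustrative):

// fetch a person and swap the stored ObjectIds for the actual address documents
PersonModel.findOne({ name: 'Abinash' })
  .populate('address')
  .exec(function (err, person) {
    console.log(person.address); // now an array of AddressModel documents
  });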

Refer to this link for further details about Mongoose.

Conclusion

It’s always about choosing the right tools for the cause. I hope my experience helps you in your endeavor to build quality web apps. I would love to hear from you if you’d like to add any technical facets you have encountered or used in your own development process.


         
