Most of us web developers write code intending to satisfy almost all browsers. Sometimes, though, that intention never comes to fruition. Recently I was working on a project where we decided to restrict the app to certain browsers only, based on the set of features supported by each. Hence we needed to filter certain browsers out (based on feature compatibility) and allow only the whitelist of browsers we had with us.

Browser detection using the User-Agent header

One naïve way of addressing the issue above is browser detection using the HTTP header called User-Agent. This header provides certain cardinal information about the connecting client to the server, and since time immemorial it has been used to determine the browser (platform, etc.) of the connecting client.

For example, for a Chrome browser, the User-Agent header could look like:

User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.27 Safari/537.36

For a Firefox browser, the User-Agent could look like:

User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:32.0) Gecko/20100101 Firefox/32.0

For Internet Explorer 10, the User-Agent could look like:

User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; BOIE9;ENUS)

Note that it’s quite easy to identify the browser being used. Each of these User-Agent headers contains a precise token for the browser, e.g. Chrome, Firefox, MSIE. This is all fine and (conceptually) robust. But enter Internet Explorer 11. For this browser, the User-Agent is kind of tricky.

For Internet Explorer 11, it looks like:

Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko

Whoa! Where is MSIE 11, or IE, or any other token that would imply the browser is IE11? It seems to have been a very deliberate step to meticulously arrange the User-Agent header so that the browser is treated like any other WebKit/Gecko-based browser. But “… like Gecko”? The intent is that servers do not digress onto the special content they would have served an MSIE User-Agent, and instead serve the same standards-based content they would serve a Gecko-based browser. I do not fully understand the reasoning behind this stealth mode, and whether it is right or wrong is a debate that will be endless.
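As a small sketch of the consequence (the helper name and regexes below are my own, not any standard API): with the MSIE token gone, the only tokens left to sniff IE11 by are “Trident/7.0” and “rv:11.0”.

```javascript
// Hypothetical helper: detect IE11 by its Trident engine version and
// the rv: revision token, since "MSIE" no longer appears in the header.
function isIE11(ua) {
    return /Trident\/7\.0/.test(ua) && /rv:11\.0/.test(ua);
}

var ie11 = "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko";
var ie10 = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)";

isIE11(ie11); // true
isIE11(ie10); // false
```

Which only underlines how brittle the whole approach is: the check depends on incidental engine tokens rather than any stable browser identifier.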

Well, one more reason why sniffing the User-Agent header to classify browsers is error-prone.

User-Agent spoofing

User-Agent spoofing is the process of altering the User-Agent string sent with our requests. Yes, browsers allow that. Once done, we can send requests to a server with our newly designated User-Agent header. Although the User-Agent is just one piece of the puzzle of detecting client information, it still provides considerable information about the client, and a plethora of extensions (and websites as well) exist for Chrome and Firefox to spoof it. This is one of the major reasons for reluctance to detect browsers via the User-Agent.
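To illustrate why spoofing defeats sniffing (a minimal sketch; looksLikeChrome is a hypothetical helper, not from any library): a naive check trusts whatever string the client chooses to send.

```javascript
// Hypothetical naive check: "is this Chrome?" based purely on a
// substring of the User-Agent header.
function looksLikeChrome(ua) {
    return ua.indexOf("Chrome/") !== -1;
}

// A Firefox user spoofing a Chrome User-Agent passes the check;
// the server has no way to tell the difference.
var spoofed = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.27 Safari/537.36";
looksLikeChrome(spoofed); // true
```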

Applet on the client

Oh well, you can also run applets on the client to extract a lot of information about it.

Feature detection

A better way of restricting apps to certain browsers is feature detection, i.e. checking the features supported by the browser. For example, if our app needs WebSockets, then checking for WebSocket support in the browser is a better approach than checking whether the browser is Firefox, Chrome or IE10+.

if ("WebSocket" in window) {
    // you are good to go…
} else {
    // not supported
}

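The same pattern generalizes: a feature test is just a property lookup on the global object. As a hedged sketch (hasFeatures is an illustrative helper of my own, not a standard API), we can check a whole list of required features at once:

```javascript
// Illustrative helper: check that every required feature exists as a
// property of the given global object.
function hasFeatures(globalObj, features) {
    return features.every(function (f) {
        return f in globalObj;
    });
}

// In a browser you would call, for example:
// hasFeatures(window, ["WebSocket", "localStorage"])
```

Passing the global object in explicitly also makes the check easy to exercise against a mock object outside a browser.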
Not long ago, I had a requirement to render WebP images to browsers that support them and normal JPGs to the rest. Browser detection would have worked, but since the User-Agent can be spoofed (thereby breaking the app), I used feature detection.

function checkIfWebPisSupported() {
    var imgUrl = "/files/images/sample.webp";
    var img = new Image();
    // attach the handlers before setting src, so no event is missed
    img.onload = function () {
        // bingo… we support WebP
    };
    img.onerror = function () {
        // not supported, fall back to JPG
    };
    img.src = imgUrl;
}

So, we try loading a sample WebP image in the browser and check whether the onload event fires. If it does, the browser supports WebP; if not, we render JPGs. This does make a small compromise of one extra HTTP request, but for a very small sample WebP image the trade-off pays off: if the browser does support the format, the final rendered images will be much smaller than JPGs.


Object/feature detection is well proven and it works. It’s high time we gave up on browser detection, all the more so now that IE11 has decided to play Bond.
