

With one of its latest releases, the SAP HANA Cloud platform now provides the compression feature itself. You can check the updated blog post here.


Modern web applications usually rely on various front-end technologies that provide a rich end-user experience in the browser: JavaScript files and frameworks, CSS style sheets and others. Combining resources from so many places into a single HTTP response may result in a heavy payload being sent to the browser. To make better use of network bandwidth and resources, and to decrease the size of the transported data, a compression mechanism that is transparent to the user is introduced in between. This article briefly presents the topic of HTTP compression and provides some instructions and recipes for enabling it in your SAP NetWeaver Cloud application.

HTTP compression is a publicly standardized mechanism whereby a web server compresses the contents of an HTTP response before sending it; the compressed data is delivered to the client, where it is decompressed and processed as usual (more rarely, HTTP request compression can be used as well). Available since version 1.1 of the HTTP protocol, it is a simple and effective way to save time and network resources, and thus improve the end-user experience and customer satisfaction. It can significantly reduce the size of web pages – up to 75% – and therefore give application users better load times in their browsers.

Although the feature is supported by the majority of modern browsers, including mobile ones, be aware that there may still be browsers which do not handle this functionality well, or do not support it at all. In those cases the communication between the browser and the web server uses uncompressed data. Compression also naturally comes at the cost of additional CPU usage, but most of the time this is well worth it, considering the easy and fast way to improve overall web performance that this solution offers.

The compression feature in a nutshell

The technology assumes that the client and server are capable of performing compression and decompression on their side, and requires a brief negotiation dialogue between them that goes like this:

  • The web client declares its support for compression with the Accept-Encoding header in the HTTP request:
Accept-Encoding: gzip, deflate

     Modern web browsers usually do this by default, so you don’t have to do anything on your side.

  • If the web server sees this header in the request, it may (but is not obliged to) compress the response using one of the methods listed by the client. The web server notifies the web client of this via the Content-Encoding header in the response:
Content-Encoding: gzip
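The server side of this dialogue boils down to parsing the Accept-Encoding header and picking a scheme it supports. Here is a minimal sketch in plain Java; `EncodingNegotiator` is our own illustrative class, not part of any library, and for simplicity it ignores quality values (`;q=`) that a real parser would have to honor:

```java
import java.util.Arrays;
import java.util.List;

/** Illustrative sketch of the server's side of the content-encoding
 *  negotiation: choose an encoding based on the Accept-Encoding header. */
public class EncodingNegotiator {

    // Encodings this hypothetical server knows how to produce.
    private static final List<String> SUPPORTED = Arrays.asList("gzip", "deflate");

    /** Returns the encoding to use, or null for an uncompressed response. */
    public static String chooseEncoding(String acceptEncodingHeader) {
        if (acceptEncodingHeader == null) {
            return null; // client did not declare compression support
        }
        for (String token : acceptEncodingHeader.split(",")) {
            String encoding = token.trim().toLowerCase();
            if (SUPPORTED.contains(encoding)) {
                return encoding; // first supported scheme wins
            }
        }
        return null; // nothing we support was offered
    }

    public static void main(String[] args) {
        System.out.println(chooseEncoding("gzip, deflate")); // gzip
        System.out.println(chooseEncoding(null));            // null
    }
}
```

If an encoding is chosen, the server compresses the body and announces the choice back in the Content-Encoding response header, exactly as shown above.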

Gzip is probably the most popular compression scheme nowadays, although other schemes and algorithms like deflate and compress are also used. Simply stated, gzip compression tries to find similar strings within a text file and replaces them temporarily to make the overall file size smaller. This form of compression is particularly well suited for the web, because HTML and CSS files usually contain plenty of repeated strings, such as whitespace, tags and style definitions. Image, music, video or PDF files are already compressed and should not be gzipped. In most cases compressing the “big 3” file types (HTML, CSS and JavaScript) is sufficient. This is why HTTP compression, also known as content encoding, mostly targets textual content, like HTML, JavaScript, CSS, XML, JSON, etc.
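You can see how well repeated markup compresses with a few lines of standard JDK code; `GzipDemo` is an illustrative class of ours, using nothing beyond `java.util.zip`:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

/** Quick illustration of why gzip works so well on markup: repeated
 *  tags and whitespace compress to a fraction of the original size. */
public class GzipDemo {

    /** Gzip-compresses the given bytes in memory. */
    public static byte[] gzip(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(input);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Build a repetitive HTML-like payload, as a typical page would contain.
        StringBuilder html = new StringBuilder("<html><body>\n");
        for (int i = 0; i < 200; i++) {
            html.append("<div class=\"item\">row ").append(i).append("</div>\n");
        }
        html.append("</body></html>");

        byte[] raw = html.toString().getBytes(StandardCharsets.UTF_8);
        byte[] compressed = gzip(raw);
        System.out.println("raw: " + raw.length
                + " bytes, gzipped: " + compressed.length + " bytes");
    }
}
```

On such repetitive input the gzipped size typically comes out at a small fraction of the original, which is exactly the effect we are after for HTML, CSS and JavaScript responses.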

Getting started with Gzip on NetWeaver Cloud

Let’s take as an example the explore-ui5 sample SAP UI5 application from the SAP NetWeaver Cloud SDK. Deploying that app without configuring any gzip compression and requesting its initial page gives the following load experience:


And here is what Firebug shows for the GET request of the heaviest file, which also took the most time to be processed:


Let’s bring gzip into the game now. For this purpose we will employ an open-source third-party library called webutilities. It is Apache 2.0 licensed and offers some nice features: an easy-to-employ servlet/servlet filter mechanism which deals with the compression; a configurable set of MIME types to be compressed; a configurable compression threshold, etc. It also supports decompression of the HTTP request, if it is sent compressed. We will stick to the default settings for the CompressionFilter given here. In essence this filter uses the standard JDK functionality for performing gzip compression, found in the java.util.zip package in classes like GZIPOutputStream. That’s why you don’t need to package any additional gzip utilities inside your application. The filter also supports the compress and deflate encodings.

From the technical point of view, we add the filter inside the application’s web.xml descriptor as shown in the project site:

                <param-value>512</param-value> <!--  anything above 512 bytes  -->

and add the webutilities-<version>.jar and yuicompressor-<version>.jar files in WEB-INF/lib folder of the application:
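Spelled out, the filter declaration and mapping in web.xml might look roughly like the following. Note that the fully qualified filter class name and the compressionThreshold parameter name here follow the webutilities project documentation as we recall it; double-check them against the project site for your version:

```xml
<filter>
  <filter-name>compressionFilter</filter-name>
  <filter-class>com.googlecode.webutilities.filters.CompressionFilter</filter-class>
  <init-param>
    <param-name>compressionThreshold</param-name>
    <param-value>512</param-value> <!-- compress anything above 512 bytes -->
  </init-param>
</filter>
<filter-mapping>
  <filter-name>compressionFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```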


Redeploying the app now shows a significant improvement: the static content of our application is delivered 2.5 times faster in a four-times-smaller response:


The response headers now include the information about the gzipped contents (the same file is again the largest part of the response, and took the most time to be delivered):


Many other solutions exist that provide similar functionality, implemented from a technological point of view via the standard servlet Filter/ResponseWrapper mechanism. Most of them use the same approach for performing the compression. Of course, you could perfectly well try them out and incorporate them into your SAP NetWeaver Cloud application; webutilities is just our choice for the sample shown here.

A notable example of a library that is more flexible and feature-rich in the compression aspect is the Apache Commons Compress project. It offers several formats for the compressed files it produces and handles. It also carries its own implementation of the zip format, which is claimed to provide capabilities beyond those found in java.util.zip. If you are picky about the compression details, you may want to take a closer look at this project.

Other types of clients

If your client is not actually a browser, you should check its documentation for how to enable the Accept-Encoding header when sending requests. This should be sufficient to make use of the compression and its benefits. Although CSS and HTML files would normally make sense in a browser environment only, there could still be use cases where you benefit from compressing the response, even when using a command-line client, for example (after all, saved time is saved time). In such cases keep in mind that you will have to take care of decompressing the response yourself, instead of the browser doing it. For example, on a Linux environment you could call wget and curl like this:

$ wget -O - --header="Accept-Encoding: gzip" <URL> | gunzip > output.html

$ curl -H "Accept-Encoding: gzip" <URL> | gunzip > output.html

The Apache HTTP client has this example of how to use response compression. You may also want to check the DecompressingHttpClient class.
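If your client library hands you only the raw response stream and headers, the decompression step itself needs nothing beyond the JDK. A minimal sketch, assuming you read the Content-Encoding header yourself (`ResponseReader` and `readBody` are our own illustrative names, not part of any library):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

/** Helper for clients that must decompress gzipped responses themselves. */
public class ResponseReader {

    /** Reads the full response body, unwrapping gzip if the server declared it. */
    public static byte[] readBody(InputStream body, String contentEncoding)
            throws IOException {
        InputStream in = "gzip".equalsIgnoreCase(contentEncoding)
                ? new GZIPInputStream(body) // transparently inflate gzip content
                : body;                     // identity: pass the bytes through
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }
}
```

The same GZIPInputStream wrapping is essentially what browsers and the decompressing HTTP clients mentioned above do for you behind the scenes.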

The case of HTTPS

In general there are no obstacles to compressing HTTPS responses as well. That is, encryption and compression of the response work together without any general problems, as our sample application here shows.

In conclusion

Taking advantage of the HTTP compression mechanism is considered a best practice and is recommended by many experts on website optimization. Moreover, it is widely used by many popular websites, so it is quite likely that you are already benefiting from it in your daily work, whether you realize it or not.


        1. Krum Bakalsky Post author

          Thanks for sharing this. It really seems that the large number of requests still takes a lot of time, somewhat outweighing the benefits from the reduced contents of the responses.

  1. Robert Wetzold

    There seems to be one issue when using JSP files. We include some JS files directly to avoid additional requests using:

    <jsp:include page="/WebContent/js/common_utils.js" />

    This will not work anymore since the webutilities library will change the content-type (probably a bug) to text/javascript instead of leaving it at text/html.

    The solution is to include the files differently:

    <%@ include file="js/common_utils.js"%>

    1. Krum Bakalsky Post author

      Thanks, Robert! I have reported this behavior to the webutilities project. I guess they can give us a hint about what is causing it.

    1. Krum Bakalsky Post author

      Thanks, Dmitry,

      for these comments on the topic. I guess that using response caching could serve almost the same purpose: once requested and therefore gzipped, a static resource could later be served from the cache, so no additional overhead from unnecessary compression would occur.

      We haven’t explored the particular ResponseCacheFilter functionality of webutilities though.

  2. Simon Kemp

    Thanks for this Dmitry, this is exactly what I was looking for. I’d like to add one thing I found: when deploying locally (for testing) on Tomcat 7, it complained that it could not find org.slf4j.Logger, so I also needed to include the slf4j-api-x.x.x.jar.

    This brought the size down from 2.5MB to ~750KB!


