
Back to the future

Having been in this industry for some time now, I can’t help noticing things cropping up again every so often. It would appear that, as in fashion, if one can’t invent something new, one can always take an idea from the past and simply wrap it differently. A good example in this industry is the mainframe. How many times has it been said that the mainframe and the central model are dead, and that decentralisation is the way to go? Meanwhile Sun made an attempt with their (virtual display) thin clients. And of course there is now Web 2.0 and the online office suites.

The mainframe isn’t dead, by the way. IBM is even building new ones for better performance in virtual worlds like Second Life (which is another example of centralised applications).

You might say that I’m comparing apples and oranges. No, I don’t mind these evolutions, and yes, the technologies are different (no longer real dumb terminals, to name at least one). But the philosophy remains the same: no thick client and thus no extra software on the client. To put it bluntly, it’s old wine in new bottles.

I had the same feeling when I came across a web log the other day. The web log explains how to bundle your CSS and JavaScript in order to improve performance. Talk about déjà vu! Maybe all the broadband connections that are now available have let us forget the time when one had to manage with a 9600 baud modem. It was a hard time as a web developer to develop a web site that contained something more appealing than just plain text, and yet was still viewable by the end user without them being obliged to consume litres of coffee and run up a huge telephone bill. I’m sure that interlaced GIFs and progressive JPEGs will ring a bell for those of you who were active in the early days.


Do the request

What’s it all about? The default behaviour in the original HTTP/1.0 protocol is that the connection is closed after the completion of a request. This means that fetching every object requires at least 2 RTTs (round-trip times):

1. TCP handshake

2. request/response

That means a lot of round trips, e.g. for a page containing 10 graphics, a CSS include and a JS file. Things changed in 1999 with HTTP/1.1, which introduced persistent connections, and browsers began issuing requests in parallel. Despite that, browsers still don’t load all of the elements of a page in one go. It all depends on your browser settings and whether they follow the old guideline of at most two parallel downloads per host name. You can check (and if necessary modify) these settings yourself.
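To get a feel for the numbers, here is a back-of-the-envelope sketch in Python (not a benchmark; the function name and parameters are mine). It assumes every object costs two RTTs and ignores bandwidth, keep-alive and caching:

```python
import math

def estimated_load_time(num_objects, rtt_ms, parallel=2):
    """Objects are fetched in 'waves' of `parallel` simultaneous requests;
    each object costs 2 RTTs (TCP handshake + request/response)."""
    waves = math.ceil(num_objects / parallel)
    return waves * 2 * rtt_ms

# A page with 12 objects (10 graphics, a CSS file, a JS file) on a 200 ms link:
print(estimated_load_time(12, 200))               # two connections per host
print(estimated_load_time(12, 200, parallel=6))   # a more generous browser
```

Crude as it is, the sketch shows why both the number of objects and the number of parallel connections matter so much on high-latency links.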

In Microsoft Internet Explorer, look in regedit for the key

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings

and the DWORD MaxConnectionsPerServer.

In Firefox, look in about:config for network.http.max-persistent-connections-per-server.

Bundle it

Several “solutions” for better performance have been proposed over the years. The funniest was simply to decrease the number of components to an absolute minimum. That reminded me of a tip on how to reduce your phone bill in some miser’s guide: don’t use the phone for making calls; make sure you get called instead.

The best method is to make sensible use of components. The earlier mentioned web log concentrates on bundling all CSS and JS components into the calling HTML page.

Why not put all the code in the HTML page straight away? Indeed, there is no objection to that. But if you want to prevent clutter, or for example want to reuse or group code, you will need to keep the code in separate files. So you need to find a way to bundle the components in such a way that they require only one request. In the aforementioned web log, this is fixed with a module for the Apache server.
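The bundling idea itself is tiny. Here is a minimal illustration in Python (file names and contents are made up): one handler concatenates all requested sources into a single response body, so one request replaces many.

```python
def bundle(sources):
    """Concatenate the individual CSS/JS sources, newline separated,
    into one response body."""
    return "\n".join(sources)

# Hypothetical in-memory stand-in for files on disk (or stored pages):
css_files = {
    "h1.css": "h1 { font-size: 20px; }",
    "h2.css": "h2 { font-size: 18px; }",
}

payload = bundle(css_files[name] for name in ("h1.css", "h2.css"))
print(payload)
```

The real work in any bundling solution is not the concatenation but deciding where the sources live and when to serve them; that is exactly where BSP can help.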

Within the BSP environment one doesn’t need to go as far as to create server code. In fact, if desired, one doesn’t need to code anything.



The first prerequisite is not to import the CSS and JS files as MIMEs. MIMEs are treated as separate objects and, unless already in the cache, will always be loaded separately. In fact, I would recommend uploading only binary content (JPEG, GIF, Flash, etc.) as MIMEs. All other content should be created as normal BSP pages or page fragments. It’s a common misunderstanding that BSP pages and fragments may contain only BSP/ABAP code; you can put anything in them. The advantage is that you no longer need to download a MIME, edit it in a separate editor and upload it again whenever you want to modify something in the CSS or JS code.

Choosing page fragments would solve the bundling immediately, since fragments are compiled into the including page and result in a single page being retrieved. Still, I’m not so keen on this solution. To be honest, I hate page fragments when it comes to coding (certainly when a fragment is used in multiple BSP pages) and debugging (in production environments).

Therefore, I looked for a solution which retrieved all the files needed to be included in a BSP page. The result is an application method with two import parameters:

  • the name of the application

  • a string with all the names of the JS and CSS files, separated by spaces.

The returning parameter is a string containing all the JS and CSS content.
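Before the ABAP itself, the logic can be summarised in a short Python sketch (fetch_page is a hypothetical stand-in for the cl_o2_api_pages calls; the names and the fake store are mine):

```python
def load_it(application_name, page_names, fetch_page):
    """Split the space-separated names, fetch each stored page as a
    table (list) of lines, and glue everything into one string."""
    content = []
    for name in page_names.split():
        content.extend(fetch_page(application_name.upper(), name.upper()))
    return "\n".join(content)

# Usage with a fake page store:
store = {("MYAPP", "H1.CSS"): ["h1 {", "  color: #000000;", "}"]}
print(load_it("myapp", "h1.css", lambda app, page: store[(app, page)]))
```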

The code looks like this:

METHOD load_it.

  DATA: p_key    TYPE o2pagkey,
        l_layout TYPE o2pageline_table,
        p_data   TYPE REF TO cl_o2_api_pages,
        wa       TYPE o2pageline,
        pages    TYPE TABLE OF string,
        wp       TYPE string.

  CLEAR content.

  p_key-applname = application_name.


Then we split the names string into a table:

  SPLIT page_names AT space INTO TABLE pages.


We loop over that table:

  LOOP AT pages INTO wp.


We retrieve the ‘page’ content (remember that the JS and CSS are stored in BSP pages):

    p_key-pagekey = wp.
    TRANSLATE p_key-applname TO UPPER CASE.
    TRANSLATE p_key-pagekey TO UPPER CASE.

    CALL METHOD cl_o2_api_pages=>load_with_access_permission
      EXPORTING
        p_mode    = 'SHOW'
        p_pagekey = p_key
        p_version = 'A'
      IMPORTING
        p_page    = p_data
      EXCEPTIONS
        OTHERS    = 1.

    CALL METHOD p_data->get_page
      IMPORTING
        p_content = l_layout
      EXCEPTIONS
        OTHERS    = 1.

Each retrieved page comes back as a table of lines, so we loop over it and concatenate each row to the result, adding a new line before each row:

    LOOP AT l_layout INTO wa.
      CONCATENATE content cl_abap_char_utilities=>newline wa-line INTO content.
    ENDLOOP.
  ENDLOOP.
ENDMETHOD.

The results

As you can see, it’s rather simple code. I’ve done this at application level, but one could consider making a BSP extension out of it. That is a bit beyond the scope of this web log, but the online help and the numerous web logs on SDN concerning BSP extensions can certainly help you further.

Now, what is the performance gain with this method? Well, I first made a page that included two CSS files coded the classical way.

Then I measured the response times with Firebug:

[screenshot: Firebug response times for the classical page]

After that, I made a new page which used the application method.

main_bis.htm

<%@page language="abap"%>
<html>
  <head>
    <style type="text/css">
      <%=application->load_it( APPLICATION_NAME = runtime->application_name PAGE_NAMES = 'h1.css h2.css' )%>
    </style>
  </head>
  <body>
    <h1>Header 1</h1>
    <h2>Header 2</h2>
  </body>
</html>
h1.css

h1 {
  font-family: Georgia, "Times New Roman", Times, serif;
  font-size: 20px;
  font-weight: bold;
  color: #000000;
}

h2.css

h2 {
  font-family: Georgia, "Times New Roman", Times, serif;
  font-size: 18px;
  font-weight: bold;
  color: #000000;
}
The result is:

[screenshot: Firebug response times for the bundled page]
  • Hi Eddy,
    these numbers sure look impressive, but there is much more to performance. I’m sure you are aware of these additional facts, but I felt they should be mentioned here as well.

    Round-trip performance only has an impact if there are round trips to make. In web applications, most of the MIMEs should only be loaded on the first page request anyway; any further requests are very likely served from the browser cache.

    Your example is also highly theoretical, as no real application has such small file sizes. Just take the SAP standard CSS file for Design2003 (or any other), or the “sdn_general.css” that is used for displaying this very weblog page, and see how your HTML file would grow in size if you were to include the whole file in your HTML page. Having this additional (identical) content in every response would quickly use up the otherwise saved time.

    While these arguments can be countered as well (for example smart mechanisms only including the needed styles), they provide enough “food” for conversation. For me, up to now, caching is still good enough to keep me from including styles in my HTML all the time. The only time for placing CSS in the HTML page is during development/debugging, to make sure no cache is used.

    Your mainframe/Web 2.0 thoughts are agreeable, and requests may indeed be more of a latency factor than a bandwidth one. But bypassing the cache still doesn’t make much sense to me for web applications, let alone Web 2.0-ish sites.


    • Hi Max,

      Thanks for the additional thoughts on larger files. I guess there is no magic ‘one size fits all’ method. As said in the web log, testing things out is the best way to discover what suits best in specific situations.