
Several weeks have passed since the last post in this series, so I guess it’s about time we continue our little project here, don’t you think? And why not pick up where we left off and bridge the gap between the service layer and the user interface? Sounds like a plan?

As you may remember from my assessment blog post, the UI of the original application wasn’t really appealing from either an aesthetic or a functional point of view. While the former may lie in the eye of the beholder, and one could argue that a good-looking user interface is not really required for a sample application, the second point really needs to be fixed. Fortunately, fixing the latter also enables us to improve the former – which inspired the title of this blog 😉 .

But before we get started, let’s quickly recap what I’m complaining about regarding the original design:

  • there’s no clear separation of markup and presentation (e.g. inline CSS statements)
  • incorrect semantic usage of HTML tags (e.g. input fields without corresponding labels, usage of <select> instead of an (un)ordered list, …)
  • hard-coded JavaScript event handlers and mandatory JavaScript = no graceful degradation
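To make the first and last points concrete, here is a minimal before/after sketch. The field and function names are made up for illustration, not taken from the actual application:

```html
<!-- Before: inline styles, no label, hard-coded event handler -->
<input type="text" id="name" style="border: 1px solid red"
       onchange="validateName()" />

<!-- After: semantic markup with a proper label; styling and behavior
     live in external CSS/JS files and are attached unobtrusively -->
<label for="name">Name</label>
<input type="text" id="name" name="name" />
```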

Come on… it’s 2013, and given the strong promotion & adoption of HTML5 we should really try our best to demonstrate best practices when it comes to modern web application design. As ironic as it may sound, the best way to achieve this may just be to start all the way at the beginning, developing the user interface with old-fashioned technologies and THEN applying progressive enhancements.

The good ol’ times

Back in the day, the entire model-view-controller (MVC) pattern used to build user interfaces lived on the server side, and the only HTTP verbs being used were GET and POST (via HTML forms and their submit buttons). Then there was this whole AJAX movement and DOM manipulation via JavaScript to enrich the experience and make it more desktop-like. Sure, this approach resulted in flicker-free updates as only parts of a given HTML page were replaced/refreshed. It also helped to preserve bandwidth as only deltas were transferred back and forth. The downside was that a hard dependency on JavaScript (or worse technologies!) was introduced … oh, and the browser’s “Back” button wasn’t working anymore. 🙁

Fast forward to today and we see a new breed of web applications, in which the whole MVC model is handled on the client side and the server is reduced to a pure data provider. Typically, data is exchanged via REST using JSON. So-called single-page applications have become state-of-the-art. Given the performance of today’s devices and the power of modern browsers, paired with the capabilities provided by the mix of HTML5, CSS3 and JavaScript (often via jQuery), we get a much better web experience than a few years ago. Fair enough… as long as these technologies are used the right way!

Doing it right

One of the most thought-provoking reads I stumbled upon lately has been ROCA (Resource-oriented Client Architecture), a “collection of simple recommendations for decent Web application frontends.” I strongly recommend reading it – it’s only a single page to start with. I believe it contains a lot of valid arguments, and therefore I decided to adopt them and demonstrate how to implement them.

Enough of the talking, let’s get started!

Introducing composition views

A good way to ensure that your application remains flexible and maintainable over time is to apply a modular design. That holds true for user interfaces as well. Generally, most web site layouts can be broken down into several components: header, top-level navigation, main content, sidebars, footer… you get the idea. We may also want to separate technical components such as meta information, JavaScript and CSS declarations.

A great way to achieve this is via a template/composition library such as Apache Tiles. Here’s the official explanation of what the framework offers:

“Tiles allows authors to define page fragments which can be assembled into a complete page at runtime. These fragments, or tiles, can be used as simple includes in order to reduce the duplication of common page elements or embedded within other tiles to develop a series of reusable templates. These templates streamline the development of a consistent look and feel across an entire application.”

Integrating Tiles

The first thing we need to do is obviously to reference Apache Tiles within the pom.xml file. Then we introduce a default template/layout following the concept explained above: default.jsp. The definition of the individual components and pages is done in a tiles-defs.xml file. Of course, we also need to configure the Spring MVC framework to use Tiles; this is done in the servlet-context.xml file. The last remaining step is to separate the logical components of the original home.jsp page into corresponding fragments/tiles. (The complete commit log for the Tiles integration can be found here in case you want to see all the related changes at one glance.)
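To give you an idea of what such a tiles-defs.xml might look like, here is a minimal sketch. The template and fragment paths are assumptions based on the layout described above, not the actual file locations in the project:

```xml
<!-- tiles-defs.xml: a base layout plus a page that extends it -->
<tiles-definitions>
  <definition name="base" template="/WEB-INF/views/templates/default.jsp">
    <put-attribute name="meta"        value="/WEB-INF/views/tiles/meta.jsp" />
    <put-attribute name="stylesheets" value="/WEB-INF/views/tiles/stylesheets.jsp" />
    <put-attribute name="js"          value="/WEB-INF/views/tiles/js.jsp" />
  </definition>

  <!-- concrete pages only supply the parts that differ -->
  <definition name="home" extends="base">
    <put-attribute name="content" value="/WEB-INF/views/tiles/home.jsp" />
  </definition>
</tiles-definitions>
```

Each page definition inherits the common fragments from `base` and only overrides its main content, which is exactly what keeps the look and feel consistent across the application.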

Going all the way

Now that we have a clean structure in place we can address some of the original drawbacks. First, we eliminate the JavaScript coding altogether (we’ll add some back later on, yet we first want to make sure the application also works without JavaScript). Next, we update (and rename) the original home.jsp file to accommodate the new model we introduced back in chapter 5. The new contact management page is called contact.jsp. Of course, we need a corresponding controller on the server side that handles all the GET/POST requests – ContactController.
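Here is a dependency-free sketch of the idea behind such a controller. The Spring MVC annotations are shown as comments so the snippet compiles stand-alone; the method and field names are assumptions, not the actual ones from the project. The key trick for a JavaScript-free UI is that each submit button carries its own request parameter, so a dedicated handler method can react to it:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the contact model (hypothetical shape)
class Contact {
    final List<String> emails = new ArrayList<>();
}

class ContactController {

    // @RequestMapping(value = "/contact", method = RequestMethod.GET)
    public String show() {
        return "contact"; // resolved to the "contact" Tiles definition
    }

    // @RequestMapping(value = "/contact", method = RequestMethod.POST,
    //                 params = "addEmail")
    public String addEmail(Contact contact) {
        contact.emails.add(""); // adds another (empty) email field
        return "contact";       // re-render the full page - works w/o JS
    }
}
```

With this mapping style, clicking the “add email” submit button performs a plain form POST and the server re-renders the whole page with one more input field – no client-side scripting required.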

Responsive design

One of the key requirements for a modern web application is that it has to look nice on a variety of devices ranging from HiRes desktops to tablets to mobile phones. That’s easier said than done! One way to achieve this is to create multiple layouts – one for each device type. Often this is done by evaluating the user-agent information and then redirecting to the corresponding layout or returning designated stylesheets. That approach has several limitations, as it requires maintaining multiple versions of your web application for all the device types. Furthermore, user-agent information can be spoofed. A much more promising approach is to use so-called media queries within your stylesheets, which apply the appropriate stylesheet declarations based on device-specific media features. This way, you maintain a clean separation between markup and presentation and a single layout for all types of devices. This is known as responsive web design.
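A media query in its simplest form looks like this – a hedged sketch, where the breakpoint and class names are made up for illustration:

```css
/* Default (wide-screen) layout: sidebar floats next to the content */
.sidebar { float: right; width: 25%; }

/* On narrow screens (e.g. phones), stack the sidebar below the content.
   The 767px breakpoint is just an example value. */
@media (max-width: 767px) {
  .sidebar { float: none; width: 100%; }
}
```

Note that the markup stays untouched: the same HTML is served to every device, and the browser picks the matching declarations itself.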

Introducing Bootstrap from Twitter

Still, creating such a responsive design is far from trivial! The good news is that you don’t have to start from scratch – you can build on the great work of others. A very popular responsive web design framework these days is Bootstrap from Twitter. Sure, there are thousands of web sites that are using it, so the look and feel of our application won’t be unique, but that’s something we can still address at a later point. After all, form follows function, right?

So, here is the corresponding commit log of the changes required to integrate Bootstrap. The most important changes are listed below:

  • updated the meta information to include a “viewport” declaration for mobile devices (meta.jsp)
  • created a component for our top-level navigation/header (navbar.jsp)
  • updated the stylesheet tile to include the Bootstrap stylesheets (stylesheets.jsp)
  • updated the JavaScript tile to include the Bootstrap JavaScript file (js.jsp)
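The first and last of these fragments boil down to a few lines of markup. A sketch of what they might contain – the resource paths are assumptions, not the actual ones from the project:

```html
<!-- meta.jsp: tell mobile browsers to use the device width
     instead of emulating a desktop-sized viewport -->
<meta name="viewport" content="width=device-width, initial-scale=1.0" />

<!-- stylesheets.jsp / js.jsp: pull in the Bootstrap assets -->
<link rel="stylesheet" href="/resources/css/bootstrap.min.css" />
<script src="/resources/js/bootstrap.min.js"></script>
```

Without the viewport declaration, Bootstrap’s media queries would never kick in on phones, because the browser would render the page at a faked desktop width and scale it down.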

Note: In the meantime I have updated to version 3.x of Bootstrap. The corresponding commit log can be found here.

Outro

With that we have a fully functional user interface again that renders nicely on all sorts of devices. Sure, it’s still a bit old-school and triggers a complete re-load of the entire page with every interaction, but we’ll fix that in the next post of this series when we look into applying progressive enhancements.

Hope to see you around next time… have fun coding!

PS: Here are two pictures illustrating the user interface before and after applying the things mentioned above:

/wp-content/uploads/2013/10/ensw_granny8_before_290384.png     /wp-content/uploads/2013/10/ensw_granny8_after_290385.png

8 Comments


  1. Chris Paine

    Nice!

    I wonder though about worrying about supporting any device that does not support JavaScript. There aren’t that many browsers that don’t support it. http://en.wikipedia.org/wiki/Comparison_of_web_browsers#JavaScript_support

    Pushing a JSP HTML view that is going to have to be so different to the JavaScript enhanced page just to cater for what must be a very rare use case, means a reasonable amount of UI work and maintenance.

    Why bother?

    Great to see another Granny post – interesting to see use of both Spring MVC and JAX-RS.

    Now just to solve why JPA isn’t creating my repository class from the interface and I might be able to catch up with you!

    Looking forwards to next post.

    Cheers,

    Chris

    1. Matthias Steiner Post author

      Hi Chris,

      always good to have you critically reflect upon my posts 🙂

      I wonder though about worrying about supporting any device that does not support JavaScript. There aren’t that many browsers that don’t support it. http://en.wikipedia.org/wiki/Comparison_of_web_browsers#JavaScript_support

      Pushing a JSP HTML view that is going to have to be so different to the JavaScript enhanced page just to cater for what must be a very rare use case, means a reasonable amount of UI work and maintenance.


      Excellent remark and you raise an important topic here: is it worth bothering? Fair question!

I guess it greatly depends on the individual scenario, and maybe it’s safe to say that it’s not worth bothering to support browsers without JavaScript capabilities. The thing is though… there are other user-agents out there! For example, what about command-line tools like curl, SEO spiders, integration/unit testing frameworks or client libraries such as Apache HttpClient! Especially in the context of the “internet of things” or (Enterprise) APIs there may be valid reasons to not tightly couple your application to JavaScript and dynamic DOM manipulations on the client side.

I mentioned ROCA in my post; they provide a whole lot more reasons why business logic should remain on the server side, where it is in your control. I think that is especially wise in the context of cloud applications, where you want to be able to roll out new features frequently. Having to worry about updating, upgrading or patching client-side business logic is something that may haunt you in the long run.

If done right, progressive enhancements are not that much of an overhead, as I’ll try to demonstrate with my next blog. That’s not to say that it’s the only way to do things, nor the best way. It’s just that it may result in a cleaner and more extensible design in the long run! In fact, that has been the guiding motto of this series: it’s best to get the basic architecture right from the beginning, as with cloud applications you want a solid foundation that helps you with continuous refactoring and adding new features.

Of course you can (and probably should) use JavaScript on the client. It’s just that many times it’s not done right, and people concentrate more on great visual effects than on getting the fundamentals right. I’m thinking about stuff that greatly matters in an enterprise context, such as accessibility (e.g. screen readers), i18n and not breaking the browser navigation (e.g. “back button”, bookmarking, …).

      Cheers,

      Matthias

      1. Chris Paine

        Thanks for responding to my questions Matthias, it’s always great to have my ideas and preconceptions challenged, and often changed 🙂

        In the internet of things example, would you not expose an alternate consumption for the same address? Request the same address but ask for JSON-LD representation? HTML is for human consumption (big assumption there!), there are much better ways to consume a site if you were a bot? Yes, this perhaps means an alternate “UI” for non humans, but not sure degrading to flat HTML helps anything other than solutions that use screen scraping to get data. And an application that’s built right surely shouldn’t support that?

        Not sure SEO should be a consideration for an enterprise app, and if the testing frameworks you are using can’t test the real user experience, they probably aren’t worth using. Or at least that testing should be moved down the stack – not sure testing at a level that is neither granular enough to show you where the code is broken, or high enough to mimic the full user experience is really worthwhile. Although I’m sure Matt Harding would say that’s pretty much where I’ve written all the unit tests in the recent pile of specs that I’ve had to write for him 😉

        I’d agree that business logic must be server side – else a malicious user can manipulate the site to circumvent the logic, but that still IMNSHO doesn’t mean you have to support fully degradable websites. Although given that in the enterprise context we still are often forced to support IE8, perhaps it should.

        I read a great article the other day which I think has a tiny bit of pertinence here:

        http://chaosinmotion.com/blog/?p=622

        read it if you have time, but the general point is that very often we create frameworks for extension without considering the consequences.

        However, this all said, I’d probably try to aim for ROCA compliance but without the fully degradable JS requirement. AJAX for retrieving data for the application is such a powerful way of enhancing an application experience and decoupling the business logic from the presentation layer that I’m not sure I’d like to try to present the same site without the ability to use it. This should not mean that accessibility/navigation etc should be hampered. Indeed, I try to code such that all visual interfaces are degradable (which leads to some pretty ugly loading screens when you load my websites on an iPad which loads the HTML and displays it before the JS has a chance to fix the layout.) But I think combining the data access and display logic (JSP) is not perhaps such a good idea, especially when one considers low bandwidth mobile applications where the ability to cache the view components (and potentially data) and then asynchronously load the data means a much improved user experience.

        But as you say, there are not right and wrong answers, just different ways of doing things which may be better for different situations – and the joy of the HANA Cloud Platform is the open standards environment that allows us to do all of them!

        I look forward to your next granny post, which will I’m sure correct the errors of my ways 😉

        Cheers,

        Chris

        1. Matthias Steiner Post author

          I totally feel the same… I am always interested in hearing other people’s opinions (especially yours) as it challenges my current point of view … and that’s how we learn (= get better!)

          You’ve touched upon a variety of important topics and I guess I won’t be able to comment on all of them in great detail. I’ll try though.

Up front, YES… there is a valid exception to every rule, but the whole approach I want to promote here is to learn the rules first, so that you know when it’s best to make an exception. I’m completely fine with grown-ups making exceptions, yet sometimes people just develop stupid things because they simply don’t know any better and it has become so easy to do (AJAX + DOM manipulation via jQuery, for example).

          Yes, I agree… for non-human agents it sounds advisable to provide a dedicated API and most often they may be more interested in data, than data + mark-up together. But I would leave it up to the agent to request the “representation” he is interested in. If he asks for JSON, fine… if he asks for HTML, obey him.

Maybe this whole discussion is more a topic of “state”. My primary argument is that if some user-agent requests an HTML representation via a URL you should return the corresponding content. And all of it. Now, from a server perspective you should not be required to know whether or not the client already has 80% of that content available. If the client knows that it only needs a delta, it should be kind enough to only request that delta. (That’s what my next post will be all about.)

SEO: Maybe I took a short-cut here, but I was more concerned with indexing, crawling etc. The worst thing you can do is hard-code dynamic delta updates without updating the URL, because it breaks bookmarking and browser navigation. Fortunately HTML5 provides the means to fix that via the History API.

          OK, enterprise apps may not be the best example for such a need as you rightfully pointed out, yet I was more thinking about proper web application development in general.

          So, I’m not saying that every web app developed today needs to work w/o JavaScript. All I’m trying to say is that it is still possible to develop applications that work w/o JavaScript. And it’s not too much of an effort. In our example above the only things required were a couple of controller methods to handle the multiple submit values (e.g. to add another email or phone number field.)

          In the next post I’ll definitely introduce some JavaScript to add these fields on the client-side via DOM manipulation and add event handlers to overwrite the original submit handling.

          But I think combining the data access and display logic (JSP) is not perhaps such a good idea, especially when one considers low bandwidth mobile applications where the ability to cache the view components (and potentially data) and then asynchronously load the data means a much improved user experience.


Not sure I see the relation here. The ability to cache content (e.g. view components) is still given even if we use JSPs. I mean, it’s the server’s responsibility to provide the necessary caching information (e.g. ETag) and then it should all work fine. You can also use mixed scenarios and deliver both markup + data (initially) and then let the client decide for himself: if he supports JavaScript, let him request the data only in consecutive request/response cycles.

          But as you say, there are not right and wrong answers, just different ways of doing things which may be better for different situations – and the joy of the HANA Cloud Platform is the open standards environment that allows us to do all of them!


          Couldn’t have said it better myself – that’s as good as a closing statement as I could ask for!

          Cheers.

          Matthias

