Volker Buzek

Testing UI5 apps, part 3.2: Code Coverage and other necessary Usefulities

Testing UI5 apps

In the previous parts, we set up Unit & Integration Tests and used the Mockserver to simulate OData and REST backends.
In this article, all aspects of the scenario are refined. If you want to tag along hands-on, the branch 03_mock-cov-useful got updated for this article.

from browser to automation

So far, running the tests has been entirely browser-based: you have to open a browser instance and (re)trigger tests manually at http://localhost:8080. This hinders a smooth development experience, as focus constantly switches back and forth between editor and browser.
To get rid of the need for switching, let’s trigger tests automatically and run them in a headless browser. Fortunately, the sample app already contains all of the base setup for this πŸ™‚
If you look at the app’s Gruntfile, the section starting with karma describes…well…the karma setup. Karma is a test-runner framework and SAP provides an official karma plugin for executing UI5 tests with it (using Chrome in headless mode).

basePath: 'webapp',
frameworks: ['qunit', 'openui5'],
openui5: {
    path: 'http://localhost:8080/resources/sap-ui-core.js'
},
client: {
    openui5: {
        config: {
            theme: 'sap_belize',
            language: 'EN',
            bindingSyntax: 'complex',
            compatVersion: 'edge',
            preload: 'async',
            resourceRoots: { 'sap.ui.demo.todo': './base' }
        },
        tests: [
            'sap/ui/demo/todo/test/unit/allTests',
            'sap/ui/demo/todo/test/integration/AllJourneys'
        ]
    }
},
files: [
    { pattern: '**', included: false, served: true, watched: true }
],
reporters: ['progress'],
port: 9876,
logLevel: 'INFO',
browserConsoleLogOptions: {
    level: 'warn'
},
browsers: ['ChromeHeadless']

In short, the karma server is launched on port 9876 and the UI5 sources are provided at localhost:8080.
The UI5 bootstrap settings in client.openui5.config should look familiar, as they’re the same as for bootstrapping any UI5 application. Note the ./base resource root – this is a special setting necessary because the karma server sets its document root to a virtual “base” directory, which in our case corresponds to webapp/.
Then the Unit tests in sap/ui/demo/todo/test/unit/allTests are run first, followed by the Integration Tests in sap/ui/demo/todo/test/integration/AllJourneys.
The files section tells karma not only to serve all files from the document root (aka webapp/), but also to listen for changes on them – as soon as an edited file is saved, the test suite is re-run automatically, making sure your coding doesn’t break any existing feature.
Go fire up a terminal, navigate to the openui5-sample-app-testing‘s root directory and run grunt watch to experience things for yourself.
Accompanying test runs while you are coding, guarding features. Neat, ay?!

test in multiple browsers in parallel

Somewhat contradicting the above headless testing, but for sure a real-world use case: running the tests in parallel in multiple browsers! (hey, who said “IE11”? πŸ™‚)
Also possible with the karma-based setup and realized here in the watchMultiBrowser task:

grunt watchMultiBrowser --browsers=Firefox,Chrome,Opera,Edge,IE

would run all tests in parallel in Firefox, Chrome, Opera, Edge, and IE (11).

The list of possible browsers is:

  • Safari
  • Edge
  • Firefox
  • Chrome
  • Opera
  • IE

Karma opens instances of the browsers in the background, so the development experience is not interrupted too much. Still, running the tests in multiple browsers of course takes time and should definitely be the exception rather than the rule, e.g. when hunting for a cross-browser bug. That’s why I’ve included a grunt task for running cross-browser tests only once, without watching for changed files:

grunt testMultiBrowser --browsers=Opera,Safari

The possible browser options are the same as above.

code coverage

The testMultiBrowser grunt task provides a good transition to the next topic: determining how much of the app’s runtime logic is covered by tests, aka “Code Coverage”. I don’t want to participate in the philosophical discussion about what percentage of code coverage is desirable, but rather focus on the technical possibilities in the UI5-verse.

Neither of the browser-based test runners (http://localhost:8080/test/unit/unitTests.qunit.html, http://localhost:8080/test/integration/opaTests.qunit.html) offers an option to check code coverage. The reason is that QUnit originally relied on blanket.js for that purpose, which is no longer actively maintained.

So for checking code coverage, the karma-based test runner is used, which in turn utilizes istanbul for the analysis.

A predefined grunt task is also available for this:

grunt coverage

will use Chrome in headless mode to run all tests and display code coverage statistics for them.

Alternatively, you can specify a comma-separated list of browsers to use, just as for the cross-browser test task:

grunt coverage --browsers=Opera,Safari

As an extra goody, the grunt task puts detailed code coverage reports per browser into /coverage/<browser-identifier>:

Opening index.html in each directory lets you drill down into the reports for individual files, even highlighting the code lines (!) that supposedly still need to be covered by either a Unit or an OPA test.



QUnit.only

If you’ve followed the blog series up until here, you should have a pretty good picture of the ins and outs of Unit & OPA tests and how to work with the Mockserver. This hopefully resulted in a tremendous amount of tests being written for your UI5 application πŸ™‚
After some time, there’s this one unit test in the middle of your testsuite that keeps failing. But in order to re-run that one specific test, all tests prior and after need to run as well every time, nagging the heck out of you…until now.
From QUnit 2 on, there’s QUnit.only – swap the test’s signature QUnit.test with QUnit.only, and only that one test is run when using grunt watch or calling http://localhost:8080/test/unit/unitTests.qunit.html.
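Conceptually, the filtering works like in this minimal sketch (plain Node.js with made-up names, not QUnit internals): once a test is registered via only, the runner executes just that one and ignores every regular registration.

```javascript
// Tiny hypothetical test registry illustrating "only" semantics.
const registry = [];

function test(name, fn) {
  registry.push({ name, fn, only: false });
}

// registering an "only" test marks it for exclusive execution
test.only = function (name, fn) {
  registry.push({ name, fn, only: true });
};

function run() {
  const onlyTests = registry.filter((t) => t.only);
  // if any "only" test exists, run just those; otherwise run everything
  const toRun = onlyTests.length ? onlyTests : registry;
  return toRun.map((t) => {
    t.fn();
    return t.name;
  });
}

test("adds an item", () => {});
test("clears completed items", () => {});
test.only("the one failing test under inspection", () => {});

console.log(run()); // → [ 'the one failing test under inspection' ]
```

Remove the only flag again and run() falls back to executing the full suite.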

That’s the good news – the bad news is, there is no opaTest.only() or opaOnly() (yet). (Hint: I’m on it πŸ™‚)

opaTodo, opaSkip

From UI5 >= 1.60.1 on, you can use opaTodo to flag a test (and its implementation) as currently being in development. It will then not count as failed, but rather as “on hold”.
opaSkip, on the other hand, will not run the test at all – a nicer counterpart to commenting out that test entirely πŸ™‚
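The semantics can be illustrated with a small sketch (plain Node.js, illustrative names – not the UI5 API): a todo test is expected to fail and is reported as pending rather than broken, while a skip test is registered but never executed.

```javascript
// Hypothetical mini-runner demonstrating test / todo / skip reporting.
function runSuite(tests) {
  const report = { passed: [], failed: [], todo: [], skipped: [] };
  for (const t of tests) {
    if (t.mode === "skip") {
      // skipped tests never execute
      report.skipped.push(t.name);
      continue;
    }
    let ok = true;
    try {
      t.fn();
    } catch (e) {
      ok = false;
    }
    if (t.mode === "todo") {
      // a failing todo is fine ("on hold"); a passing todo would
      // signal that the flag can be removed
      report.todo.push(t.name);
    } else {
      (ok ? report.passed : report.failed).push(t.name);
    }
  }
  return report;
}

const report = runSuite([
  { name: "stable journey", mode: "test", fn: () => {} },
  { name: "feature in development", mode: "todo", fn: () => { throw new Error("not done"); } },
  { name: "flaky journey", mode: "skip", fn: () => {} },
]);

console.log(report);
```

Note how the throwing todo test does not land in failed – that is exactly what keeps an in-development journey from breaking the build.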

randomness, semantics

After having nailed down mission-critical code with Unit Tests and covered essential business cases with OPA/Integration tests, it’s time to cover the edge cases – those states of an application that “should never happen”. But obviously, they do.
Upping the randomness of test cases helps discover these anomalies. Chance.js is a library that helps create randomness. A classic example of an often hidden edge case is the inability of an application to digest non-ASCII string input. Chance.js can help provide test input for that, using a string pool to choose pseudo-random input from:

QUnit.module("Todo List", {
    before: function () {
        var sPool = "???⌘???";
        sPool += "aΓΆΓΌΓ€ΓŸΓ„Γ–ΓœΓƒΓΓ€Γ–αΈΓ§αΈ›Γ©Γ¨Γͺ";
        this.sRandomUnicodeString = chance.string({ pool: sPool });
    }
});

opaTest("should add an item", function (Given, When, Then) {
    // ...
        .iShouldSeeTheItemBeingAdded(4, this.sRandomUnicodeString);
});
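Under the hood, picking from a pool boils down to selecting characters uniformly at random. Here is a minimal sketch of that idea (illustrative function name, not chance.js internals):

```javascript
// Build a pseudo-random string by drawing characters from a pool.
function randomStringFromPool(pool, length) {
  // Array.from splits by code points, so astral-plane characters
  // (e.g. emoji) are not torn into surrogate halves
  const chars = Array.from(pool);
  let out = "";
  for (let i = 0; i < length; i++) {
    out += chars[Math.floor(Math.random() * chars.length)];
  }
  return out;
}

const sPool = "⌘aΓΆΓΌΓ€ΓŸΓ„Γ–ΓœαΈΓ§αΈ›Γ©Γ¨Γͺ";
const s = randomStringFromPool(sPool, 8);
console.log(s); // prints 8 pseudo-random non-ASCII characters
```

Feeding such strings into the todo-item input quickly surfaces encoding or validation bugs that fixed test data would never hit.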

Rounding up the usefulities and this blog post is the excellent “UI5 smart mockserver“, which is part of UI5lab (a community-driven repository of UI5 custom control libraries).
The smart mockserver extends the mockserver‘s ability to generate random data to provide semantically more meaningful data. So instead of “Name 1, Name 2, …, Name 100” you’ll get actual names such as “Toby Moore, Mona Thiel”.
It does that automatically by looking at the entity property’s annotations in metadata.xml. If there are no annotations, you can configure what semantic data should be related to which OData entity property:

{
    entityName: 'Employee',
    properties: [
        {
            name: 'FirstName',
            fakerMethod: 'name.firstName'
        }
    ]
}
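Conceptually, such a config entry just maps a property name to a generator function looked up by a dotted path. A rough sketch with made-up generator data (not the smart mockserver’s actual implementation):

```javascript
// Hypothetical faker-style generator object; real libraries expose
// many more namespaces and methods.
const fakeGenerators = {
  name: {
    firstName: () => ["Toby", "Mona", "Ada", "Linus"][Math.floor(Math.random() * 4)],
  },
};

// 'name.firstName' -> fakeGenerators.name.firstName
function resolveFakerMethod(path) {
  return path.split(".").reduce((obj, key) => obj[key], fakeGenerators);
}

// generate one mock entity from a config entry like the one above
function mockEntity(config) {
  const entity = {};
  for (const prop of config.properties) {
    entity[prop.name] = resolveFakerMethod(prop.fakerMethod)();
  }
  return entity;
}

const employee = mockEntity({
  entityName: "Employee",
  properties: [{ name: "FirstName", fakerMethod: "name.firstName" }],
});
console.log(employee); // e.g. { FirstName: 'Mona' }
```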

This post wraps up the two-part installment 3 of the blog series on “Testing UI5 applications”.
Part 4 will essentially be a deep-dive into certain testing capabilities of UI5.
Part 5 holds real-world numbers on the business impact of utilizing tests in UI5 apps.

      Ian McCallum

      Really great detail, and pertinent topic. One question: are you running the coverage locally, or via the Web IDE?

      Volker Buzek
      Blog Post Author

      Running everything locally πŸ™‚

      Also, blog post part 4 is finally in the works.

      Ian McCallum

      Thank you for the reply! Looking forward to Part 4.

      Dusan Sacha

      Hi Volker,
      thanks for this series! Very helpful! I am looking forward to part 4.

      Are you planning to switch to UI5 Build Tool? Or was there any specific reason why you used Grunt?

      Volker Buzek
      Blog Post Author

      Hi Sacha,

      sorry for the delayed response, was on vacation πŸ™‚

      no specific reason why I used grunt - when the blog series started, the ui5-tooling was still in beta and the demo app used grunt; so I simply stuck with it.

      At the time of this writing, the ui5-tooling is still missing some (from my developer perspective, so YMMV) fundamental features such as auto-reload upon file changes (so no need for cmd-/ctrl-R) and reverse proxying. So until these get introduced, I will not update my fork of the sample app to the ui5 build tooling.

      But I'm aware that with commit, the app got officially migrated to the new build tooling.

      Best, V.