Bärbel Winkler

Building a custom-made test tool with Atlassian’s Confluence and JIRA

In case you came here expecting to find a post about ADT or ATC or the like, I apologize for utilising a misleading tag. Although what I’d like to share in this blog post is about testing, it’s not about any SAP-provided tool or technology. Instead, it’s an example of how to utilise Atlassian’s Confluence and JIRA to build a tool for tracking the testing of large projects, like the big upgrade to NW 7.50 with EHP8 and HANA-DB we recently did.

Setting the stage

Several years ago we did a big Unicode-conversion project in an MDMP-system environment with all sorts of “fun” aspects to it: dozens of languages, including double-byte for Chinese and Japanese, code-page conversions, and many texts in tables where the language of the content didn’t necessarily fit the language-key (if one even existed). With the help of some tool programs and lots of manual effort we built our own vocabulary to have it as ready as possible for the big-bang conversion of 5 (!) production systems over a long weekend.

Source: XKCD – https://imgs.xkcd.com/comics/unicode.png

Needless to say, a lot of tables and therefore processes were affected and needed to be tested, and not just in one location but in more than 50 countries around the globe. We didn’t have any tool to help with that, just a large number of individual Word-documents spelling out one testcase each. These could range from something as simple as “print any kind of SD-document and make sure that the result is readable in your language” to more complex testcases requiring a string of activities, like “take an SD-process from order entry via printing of the delivery document, to invoicing and finally to the FI-posting”. Each document contained some information about the expected test-result as well as boxes to tick for “testing successful” or “issues found”.

These documents were put into a large zip-file and sent out via email to the subsidiaries with instructions to run the described scenarios in a test-system and report back the results. I’m sure that you can imagine how well that worked and what the feedback we received looked like! In some cases we just received a zip-file back with updated Word-documents but no summary indicating what had worked and what hadn’t. If we were lucky, somebody had actually filled out an accompanying Excel-file, but mostly not. There was basically zero chance to really take stock of how the testing had gone, and we had to rely on the tried and trusted IT-motto “no news is good news” – if something had gone really, really wrong we would for sure have heard about it!

Looking for alternatives

Not long after the Unicode project went live successfully, we had to start planning for a big system upgrade involving a switch to another database and a move to another datacenter. It was decided early on that we needed a much better way to organise the testing, as we didn’t want to send out zipped Word-docs again with almost no chance of properly keeping track of the results. We didn’t have any tools readily available like SolMan, which – IIRC – does have something to support testing. Eventually this search ended up on my desk.

Right around that time, I had just set up some testing for one of my spare-time activities involving climate change, where I had utilised Google-forms to collect testing feedback about new/enhanced website functionality. The neat thing about Google-forms is that the results automatically end up in a spreadsheet, which allows proper filtering, e.g. to look for reported issues. It also generates some dashboards with pie-charts and other graphics. Using Google-forms inhouse wasn’t an option, but it gave me an idea of what to build making use of Confluence and JIRA.

Creating test documents and file lists

Based on the Word documents we already had with the test case descriptions, I created a simplified template which only contained fields for a test-case-ID, a title, the module, the description of what should be tested and the expected test-result. The descriptions were – in most cases – meant to be fairly generic, i.e. on a high level, with the expectation that key-users who know their regular tasks and transactions would actually do most of the testing. The recommendation for example was to just state “generate an invoice for a standard order” but not to mention any specific customer or material numbers, as these would differ depending on where the testing actually happened, both regarding the system in which the test occurred and the country / sales-org it was for. In addition to the “what should be tested”, the expected test result also needed to be described, again in a fairly generic way like “the invoice was generated properly and the financial data posted okay.”

Example of a test-case document

While folks in our process teams were busy creating the files, I set up a new space in Confluence as our test-repository. We started out with one file-list for each module to which the relevant Word-documents were uploaded. Later, project-specific file lists were added to the mix. In most cases, the document’s name corresponded to the test-case-ID (e.g. SD001 or FI010) to make identification easy and to keep it short. The neat thing about attached Word-documents in Confluence is that you can open them in a preview which leaves most – if not all – formatting intact. And the preview-link is fixed, so it can be used elsewhere to point at the document – an essential feature for what we had in mind.
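
As an aside, the upload step can also be scripted if you ever have to attach hundreds of documents. Here is a minimal sketch using Confluence’s standard REST attachment endpoint – this wasn’t part of our actual setup, and the base URL, page ID and credentials are just placeholders:

```python
# Minimal sketch (illustration only): attach a test-case document to a
# Confluence page via the REST API. URL, page ID and credentials are placeholders.
import requests

CONFLUENCE_URL = "https://confluence.example.com"   # placeholder
PAGE_ID = "123456"                                  # page holding the file list (placeholder)
AUTH = ("some.user", "secret")                      # placeholder credentials

def upload_test_case(path: str) -> str:
    """Attach a Word document to the file-list page and return its download link."""
    with open(path, "rb") as f:
        resp = requests.post(
            f"{CONFLUENCE_URL}/rest/api/content/{PAGE_ID}/child/attachment",
            headers={"X-Atlassian-Token": "nocheck"},   # required for attachment uploads
            files={"file": (path.split("/")[-1], f)},
            auth=AUTH,
        )
    resp.raise_for_status()
    attachment = resp.json()["results"][0]
    return CONFLUENCE_URL + attachment["_links"]["download"]

if __name__ == "__main__":
    print(upload_test_case("SD001.docx"))
```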

Creating a JIRA-project with a simple workflow for testing

We wanted to keep test-documentation for the key-users as simple as possible, so the JIRA-project only has five main states: “Open”, “In Progress”, “Resolved”, “Issue Resolution” and “Closed”. Here is a graphic of what it looks like and which status-changes are possible:

Each testcase contains fields to identify it, like module (e.g. SD, FI, MM …), testgroup (e.g. IT, country, department …) and testtype (e.g. system test, UAT, go-live …). Instead of a complete description, a test-case only contains a link to the preview-page of the corresponding Word-document. And this is one of the biggest advantages of this setup, especially for very generic testcases which need to be executed by dozens of testgroups and potentially at different times in the project: should we find that a description isn’t clear enough or needs to be updated for other reasons, we only have to do this once, right in the Word-document uploaded into our test-repository. Anybody who accesses the link to work through a testcase sees the latest version and not something outdated.
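
To illustrate what such a slimmed-down testcase boils down to, here is a hypothetical sketch of creating one via JIRA’s REST API. The project key, issue type and customfield IDs are made up for illustration – they depend entirely on how the JIRA project is configured:

```python
# Hypothetical sketch: create one testcase issue via JIRA's REST API.
# Project key, issue type and customfield IDs are assumptions, not real values.
import requests

JIRA_URL = "https://jira.example.com"    # placeholder
AUTH = ("some.user", "secret")           # placeholder credentials

def create_test_case(summary: str, preview_link: str) -> str:
    payload = {
        "fields": {
            "project": {"key": "TEST"},              # assumed project key
            "issuetype": {"name": "Test Case"},      # assumed issue type
            "summary": summary,
            # The description only carries the link to the Word-document preview
            # in Confluence, so the detailed instructions live in exactly one place.
            "description": f"Test instructions: {preview_link}",
            "customfield_10100": {"value": "SD"},           # module (assumed field ID)
            "customfield_10101": {"value": "IT"},           # testgroup (assumed field ID)
            "customfield_10102": {"value": "system test"},  # testtype (assumed field ID)
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]   # e.g. "TEST-42"
```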

In addition to the workflow, I also set up an overview dashboard via which it was easy to see how we were doing with the tests. It was then easy to focus on the tests ending up in “Issue Resolution” and to basically ignore all those which worked as expected. I don’t even want to think about how much effort keeping track of testing via e.g. Excel would entail – and not just once but throughout the testing, with a “poor soul” having to keep the results current manually and then generate overviews via pivots or what-not.
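
For illustration, the kind of query sitting behind such a dashboard gadget is a simple JQL search for everything currently in “Issue Resolution” – here wrapped in a small Python snippet with a placeholder project key:

```python
# Illustrative sketch: list all testcases that ended up in "Issue Resolution"
# via JIRA's search endpoint. Project key and credentials are placeholders.
import requests

JIRA_URL = "https://jira.example.com"
AUTH = ("some.user", "secret")

jql = 'project = TEST AND status = "Issue Resolution" ORDER BY updated DESC'
resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": jql, "fields": "summary,assignee,status"},
    auth=AUTH,
)
resp.raise_for_status()
for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["summary"])
```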

Uploading testcases into JIRA and leveraging testcases from project to project

For a big project with a global impact, we can easily have 500 to 1000 individual testcases (if not more!). Creating these cases individually in JIRA would be a big – and time-consuming – nuisance. Luckily enough, JIRA makes it easy to import issues via various options. One of the easiest is to import a CSV-file with a defined format where the various columns are matched to the fields needed to create a testcase. So, we do need an Excel-file listing the testcases, but these are not that difficult to create, and a file used for project “X” can later be re-used as a starting point for project “Y”. Uploading a couple of hundred testcases doesn’t take longer than a few minutes.
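
Generating such an import file can also be done with a few lines of Python instead of Excel. This is just a sketch – the column names and links below are made up, and during JIRA’s CSV import they simply get mapped to the actual issue fields:

```python
# Minimal sketch: write a CSV-file that JIRA's CSV importer can consume.
# Column names, testcase data and Confluence links are illustrative placeholders.
import csv

test_cases = [
    # (ID, summary, module, testgroup, testtype, link to the Confluence preview)
    ("SD001", "Print an SD document", "SD", "Country", "UAT",
     "https://confluence.example.com/x/abc123"),
    ("FI010", "Post an incoming invoice", "FI", "IT", "system test",
     "https://confluence.example.com/x/def456"),
]

with open("testcases.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Test Case ID", "Summary", "Module", "Test Group", "Test Type", "Description"])
    for case_id, summary, module, group, ttype, link in test_cases:
        writer.writerow([case_id, summary, module, group, ttype, f"Test instructions: {link}"])
```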

Granted, for the very first tests there was a lot of work to be done to define and describe the testcases. But this becomes less and less of a burden as time goes by and other projects need to be tested. So, the testcases we used for the EHP6-upgrade could be used as a basis for testing EHP7, and those in turn for the EHP8-upgrade – each time just enhanced by testcases for new functionality that became available between projects.

Over time, as more and more people get exposed to and use JIRA to document test-cases, the learning curve for each project becomes smaller and smaller. And, in order to make “onboarding” new testers as simple as possible, I also created step-by-step instructions for the various aspects of our test-projects. These are also readily available in our test-repository space in Confluence.

Over to you

How do you track testing, especially – but not only – for large projects?
