
Michal’s Tips: Stop testing your interface scenarios – you’re not doing it anyway, right?

When we start a project, functional consultants and developers are jointly responsible for describing a set of test scenarios that always need to be executed to check whether the interface works properly. Functional consultants contribute all the important business scenarios that need to work, and developers extend those with cases where they know the interface is developed in a complex way (multiple lines, summarizations and other complex mapping logic). Thanks to this cooperation we get a pretty decent subset of integration scenarios which, once run, should confirm the interface scenario is working correctly. Running all of the prepared test scripts needs to happen in a few project phases:

a) during the first integration testing phase (when the interface is being executed end to end for the first time ever)

b) after each change we implement to the interface scenario during integration testing, user acceptance testing, and any other testing phase performed between those two but before the first go-live

c) after go-live, when we need to fix an existing scenario or add new functionality to it

What does that look like in reality (at least from my 12 years of experience with >25 clients)?

a) during the first integration testing phase we need to check all possible scenarios, otherwise the interface would not work

b) after each change to the interface scenario we’re usually in the middle of “rapid” development where everything needs to be finished ASAP, and in many cases the development was already approved, so testing only covers a subset of the subset (1-2 test scripts at most)

c) after go-live, when we need to fix an existing scenario or add new functionality to it, we have a few choices:

– hot fix – needs to be done immediately (ASAP is too slow) – so we fix, run one test case and move to production (praying that it will not cause failures in any other scenario)

– new functionality – depending on the possible lead time – a small change is either implemented if the lead time is short (meaning we don’t test too much), or we don’t implement the change at all (as the testing team needs to run all possible test scripts, which takes 10 days, so the business realizes they can live without the change – sad, but it also happens)

What does that mean in reality? That we only have two choices:

a) we can push for running all prepared test scripts, but risk huge project delays or the outright rejection of any changes to the existing interface scenarios

b) we can stop testing (as the article’s title suggests), run one or two test scripts, and keep praying when we transport to the production environment

What is the reason for that? I’ve been asking myself the same question many times, and I came to the conclusion that it’s because of a lack of interface scenario testing tools. I’m not saying that they don’t exist, only that they do not respond to the needs of both business and developers. What would those two groups need? I’m hoping for your input here, but let me just present my short list.

Developers:

a) being able to run a full set of interface scenario tests with a single click after implementing each change, without waiting for anyone else (especially not from the business side)

b) not having to go to any transaction/entry screen, as module knowledge cannot be mandatory for retesting an interface after a change

c) being able to test the interface both on development and on quality boxes (not only on quality after the change is transported)

Business:

a) being able to record a test script from any existing document which was processed in the past and posted correctly, without the need to recreate it

b) being sure that all of the fields will always be validated (and not only the ones selected during the initial test script preparation) – see the sketch after this list

c) test scripts executed in the background every day, validating all transports and changes done by the development teams (as team members often change and may not be aware of what needs to be retested from the technical perspective)
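
As a rough sketch of what point b) could mean in practice, the snippet below compares an interface’s output payload field by field against a recorded “golden” message, so every element and attribute gets validated rather than a hand-picked subset. It uses only JDK classes; the file names are placeholders, and a real harness would also need to tolerate fields that legitimately change between runs (timestamps, document numbers):

```java
// Full-payload validation sketch: diff a newly produced message against a
// recorded golden message, element by element and attribute by attribute.
// File names are placeholders for illustration.
import org.w3c.dom.*;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class FullPayloadDiff {

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        f.setIgnoringComments(true);
        Document expected = f.newDocumentBuilder().parse(new File("expected.xml"));
        Document actual   = f.newDocumentBuilder().parse(new File("actual.xml"));
        expected.normalizeDocument();
        actual.normalizeDocument();
        compare(expected.getDocumentElement(), actual.getDocumentElement(), "/");
        System.out.println("Payloads match on every element and attribute.");
    }

    static void compare(Element e, Element a, String path) {
        path = path + e.getNodeName();
        if (!e.getNodeName().equals(a.getNodeName()))
            fail(path, "element name", e.getNodeName(), a.getNodeName());
        // Compare every attribute of the expected node, not a chosen subset.
        NamedNodeMap attrs = e.getAttributes();
        for (int i = 0; i < attrs.getLength(); i++) {
            Attr at = (Attr) attrs.item(i);
            String actualVal = a.getAttribute(at.getName());
            if (!at.getValue().equals(actualVal))
                fail(path + "/@" + at.getName(), "attribute", at.getValue(), actualVal);
        }
        // Compare child elements pairwise; compare text only on leaf nodes.
        List<Element> ec = children(e), ac = children(a);
        if (ec.size() != ac.size())
            fail(path, "child count", "" + ec.size(), "" + ac.size());
        if (ec.isEmpty() && !e.getTextContent().trim().equals(a.getTextContent().trim()))
            fail(path, "text", e.getTextContent().trim(), a.getTextContent().trim());
        for (int i = 0; i < ec.size(); i++)
            compare(ec.get(i), ac.get(i), path + "/");
    }

    static List<Element> children(Element e) {
        List<Element> out = new ArrayList<>();
        NodeList nl = e.getChildNodes();
        for (int i = 0; i < nl.getLength(); i++)
            if (nl.item(i) instanceof Element) out.add((Element) nl.item(i));
        return out;
    }

    static void fail(String path, String what, String exp, String act) {
        throw new AssertionError(path + ": " + what + " differs, expected <"
                + exp + "> but was <" + act + ">");
    }
}
```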

Request:

Would anyone have any input on this topic? I could also organize a session (SAP Mentor expert table) at SAP TechEd 2016 (Barcelona or Las Vegas) if someone would be interested in discussing how to test/retest integration scenarios, or in showing how it’s being done at their company. I’d kindly ask you to provide any input if you think this is a valid but not much discussed topic.

Important info:

If the testing process looks completely different than described, please do let me know, as I can only speak from what I’ve experienced.


      12 Comments
      Sanjeev Shekhar Singh

      Hi Michal,

      I agree with you that automated regression testing is a pain point in most integration projects. We had once tried to play around with the SOAPUI tool to automate the regression testing to an extent. However, as I tried to build more regression libraries, I realized:

      1. Building a regression library in SoapUI using Groovy scripts and validating all fields was time-consuming and required almost the same amount of effort as the original development of the interfaces.
      2. Maintenance of the regression libraries would need to be an ongoing activity as well, since the interface logic changes from time to time.
      3. It also requires some level of Java/Groovy/SAP PO knowledge to create those scripts, and often not all testers will have that expertise. And developers are generally too bored when asked to build elaborate testing artifacts (personal opinion 🙂 )
      4. In some cases we had to bypass the sender channels as well, as they required encryption/digital signatures etc.

      Having said that, it was useful to have those testing scripts automated in a lot of scenarios.

      The approach we had taken was to stub out any system where we did not have access to query the end-point directly. For systems where we could, we would let the interface run end to end and then query them to see if the processing finished successfully. For example, at times we had to simulate ECC proxy messages rather than post them directly: we routed them from/to SoapUI and performed assertions in SoapUI to verify that the interface logic was intact.
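
      A minimal sketch of the kind of receiver stub described above, using the JDK’s built-in HttpServer instead of SoapUI (the port, path and canned acknowledgement are assumptions for illustration): the receiver channel is pointed at the stub, which records each incoming payload for later assertions and always answers with a fixed response.

```java
// Receiver stub: stands in for a backend we cannot query directly.
// The interface's receiver channel is pointed at http://localhost:8099/stub;
// the stub saves each incoming payload (for later assertions) and returns a
// canned acknowledgement. Port, path and response body are assumptions.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class ReceiverStub {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8099), 0);
        server.createContext("/stub", exchange -> {
            // Save the received payload so a test can assert on it afterwards.
            byte[] payload = exchange.getRequestBody().readAllBytes();
            Path log = Paths.get("received-" + System.currentTimeMillis() + ".xml");
            Files.write(log, payload);

            // Always reply with a fixed acknowledgement.
            byte[] ack = "<ack>OK</ack>".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/xml");
            exchange.sendResponseHeaders(200, ack.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(ack);
            }
        });
        server.start();
        System.out.println("Stub listening on http://localhost:8099/stub");
    }
}
```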

      Let's see if others have used a different approach in their projects.

      Regards,

      Sanjeev.

      Michal Krawczyk
      Blog Post Author

      Hi Sanjeev,

      First of all thank you for your input 🙂

      I've also been using SoapUI in some projects for the same thing, and I have the same observation: it's not easy to create/maintain the test cases, and that effort is not taken into account when planning changes to the interface. So either the quality of the SoapUI test cases is sacrificed to keep development moving, or more time spent on development leaves the SoapUI test cases in worse shape.


      I see you had an interesting approach for "stubbing out" some of the systems in some cases. Any chance you'd be going to any TechEd this year, so we could discuss this approach in more detail?

      Thank you,

      Best Regards,

      Michal Krawczyk

      Sanjeev Shekhar Singh

      Hi Michal,

      Unfortunately, I am not attending any TechEd this year. But happy to continue the discussion here or feel free to send a DM.

      Regards,

      Sanjeev

      Iñaki Vila

      Hi Michal,

      I agree that testing in PI doesn't have the same possibilities as, for example, the SAP ABAP world. I would add the migration projects happening these days, where the number of tests can be huge and not all the scenarios are documented well enough to verify acceptable results. From my point of view, an SAP tool would be necessary to do this job easily. Until now, in the projects I work on, I have designed a Word document listing all the necessary tests; I know this is a rudimentary approach, but all the scenarios have different particularities, and writing scripts isn't always easy or even possible.

      Interesting subject, Michal, and I'm glad to read your comments again.

      Regards.

      Michal Krawczyk
      Blog Post Author

      Hi Iñaki,

      Thanks for your comment 🙂

      You've pointed out a very interesting case - test cases are not documented. On the other hand, does that mean they still exist (as IDocs, proxy messages, etc.)? If so, why is it not possible to use those to create a test script with a click of a button, instead of redoing the complete scenario end to end? Missing test data is one of the fundamental reasons why interface changes are estimated so high in many cases. How can you say that a change will take one hour if you need to retest many "unknown" test cases over a few days (as we need to catch the business people who can create transactions in external or internal systems)? It feels like integration is on hold because everyone is so afraid of damaging the existing flows. That's not good for the business (interfaces cost too much or are not developed at all) or for developers (who cannot do the development, but need to spend time figuring out how to perform testing differently in each project...).
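
      As a rough illustration of turning an already-processed message into a test case, the sketch below replays a stored payload (say, an IDoc XML exported from message monitoring) against an inbound HTTP endpoint and fails if it is no longer accepted. The file name and endpoint URL are made-up placeholders, and a real PI/PO endpoint would also require authentication:

```java
// Replay a previously processed message as a regression test case.
// The payload file and the endpoint URL are placeholder assumptions;
// a real system would also require authentication (e.g. basic auth).
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.*;

public class ReplayMessage {
    public static void main(String[] args) throws Exception {
        byte[] payload = Files.readAllBytes(Paths.get("orders_idoc_2016_08_01.xml"));

        URL url = new URL("http://pi-dev:50000/HttpAdapter/orders");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        con.setDoOutput(true);
        try (OutputStream os = con.getOutputStream()) {
            os.write(payload);
        }

        // Any non-2xx answer means the replayed scenario no longer processes.
        int code = con.getResponseCode();
        System.out.println("Replayed message, response code: " + code);
        if (code / 100 != 2) throw new AssertionError("Replay failed: HTTP " + code);
    }
}
```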

      If you'd be interested in discussing this topic a bit more at any of the TechEds (Barcelona or Vegas), do let me know - I can try to organize a table where we can share more experience on this topic.

      Thank you,

      Best Regards,

      Michal Krawczyk

      Vadim Klimov

      Hi Michal,

      The area you raised is in very high demand, yet not very well formalised or automated. From personal experience, the following challenges were commonly faced:

      - Identification of test scope. Especially for changes introduced to an existing interface, it is not always straightforward to determine which existing functionality is impacted (there may be indirect implications), which original test cases can be re-used and which new tests have to be prepared. Another level of complexity arises when changed objects are re-used in several interfaces: traversing all affected interfaces and assessing the impact on them can become a relatively time-consuming task. So far, this was mostly done manually, with some help from the where-used functionality to spot re-used objects.

      - Evaluation of test coverage. In some specific cases it was possible to use automated test coverage tools and embed them into the wider infrastructure (for example, into static source code analysis tools or even a continuous integration pipeline in tools like Jenkins), but I only succeeded in achieving this for a limited scope of PI/PO developments, primarily Java mapping programs, which can be executed and tested in standalone mode (called from a standalone program and invoked against a set of source XML messages prepared in advance or taken from past tests). Potentially, XSL transformations, routing condition rules (thanks to their XPath nature and the possibility to run most of their logic in standalone mode) and probably adapter modules to some extent can be good candidates for this, but I can hardly think of a solid and scalable technique for automating other areas in PI/PO, especially those where the source code / artifacts (e.g. graphical mapping, BPM) are auto-generated and the developer does not directly touch them. For graphical mappings, we could sometimes achieve this goal by comparing mapping versions and depicting technical changes using a textual representation of the mapping rules, but in many cases the outcome of this exercise was not something that could be directly passed to any kind of automation tool.
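
      A minimal sketch of the standalone-mode regression testing described above, with an XSL transformation standing in for the mapping (a Java mapping's core logic could be driven the same way): every recorded source message is run through the mapping and the result is diffed against a stored expected output. The directory layout and file naming are assumptions.

```java
// Standalone regression harness for a mapping: run every recorded source
// message through the transformation and compare against stored expected
// output. Here the mapping is an XSLT (mapping.xsl); directory and file
// naming conventions are assumptions for illustration.
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.File;
import java.io.StringWriter;
import java.nio.file.*;

public class MappingRegression {
    public static void main(String[] args) throws Exception {
        Transformer mapping = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("mapping.xsl")));

        // Each source message in testdata/ has a matching expected output.
        try (DirectoryStream<Path> sources =
                 Files.newDirectoryStream(Paths.get("testdata"), "*_source.xml")) {
            for (Path source : sources) {
                StringWriter out = new StringWriter();
                mapping.transform(new StreamSource(source.toFile()), new StreamResult(out));

                Path expectedFile = Paths.get(
                    source.toString().replace("_source.xml", "_expected.xml"));
                String expected = new String(Files.readAllBytes(expectedFile)).trim();

                // Plain string comparison keeps the sketch short; a field-by-field
                // XML diff would be more robust against formatting differences.
                if (!out.toString().trim().equals(expected))
                    throw new AssertionError("Regression in " + source.getFileName());
                System.out.println("OK: " + source.getFileName());
            }
        }
    }
}
```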

      - Conducting tests with the least possible distraction of / dependency on application teams. This becomes even more critical if some parts of the integrated solution are external services / cloud solutions, where it may be challenging to get the necessary application experts involved in a timely manner and keep to the testing schedule. If these are not connectivity tests or end-to-end tests involving real backend systems, simulating the sender or mocking up the receiver (for example, using the already mentioned SoapUI for SOAP/REST) commonly works. For non-SOAP/REST sender simulations, JMeter helped me a lot, too. Additionally, for ABAP systems, specific tools (like SPROXY for ABAP proxies and the IDoc test tool for IDocs) helped a lot during unit testing and reduced application team involvement.


      And the situation where (integration and regression) test scoping, coverage evaluation, simulators and test data preparation take significantly more time and effort than the actual change implementation in PI/PO (which is not a rare case - e.g. 1 day for change implementation and 3-5 additional days for testing), together with the lack of comprehensive automation tools to fill this gap and facilitate these tasks, raises concerns and sometimes causes change rejection by budget owners.

      I'm very curious to hear the feedback and experience of other PI/PO specialists, since this is definitely a hot topic - and thank you for raising it here.

      Regards,

      Vadim

      Michal Krawczyk
      Blog Post Author

      Hi Vadim,

      Thank you too for the detailed description 🙂 I will try to organize a session at TechEd on this topic, use all of your input to start the discussion, and update the blog with the results later on.


      Also thank you for the estimation figures - this just proves what I was saying: customers might not be willing to implement new interfaces or changes to existing ones, not because of the implementation effort but due to the regression/testing effort alone... which is VERY bad from the business perspective (and from the developers' point of view too...), so the better we know how to test quickly, the more development work we might have 🙂


      Best Regards,

      Michal Krawczyk 

      Daniel Graversen

      Hi Michal

      Testing is always difficult.

      There is a best practice that you have to follow, but it is in no way practical. Nobody can get the correct test cases, and if the business is doing the testing they just run the basic tests - not what comes up in the real world.

      I have also started on my own testing product to make some of the regression testing a bit easier. It will make some of the retesting easier, but if the testing approach is wrong then it does not matter.

      Looking forward to TechEd Barcelona.

      Daniel

      Michal Krawczyk
      Blog Post Author

      Hi Guys,

      I've booked a table at TechEd Vegas where we can discuss this topic.

      If you'll be around, please drop by so we can have a discussion 🙂


      Vadim Klimov

      Hi Michal,

      It is a pity I will not be able to join this in Las Vegas - any chance of you coming to TechEd in Barcelona this year? I will be there, and there are a few other SCN members who may be interested in discussing this topic.

      Regards,

      Vadim

      Michal Krawczyk
      Blog Post Author

      Hi Vadim,

      Will try to schedule the same for Barcelona and will put the info here 🙂

      Let's see if anyone will come for the session in Vegas this week 🙂

      Best Regards,

      Michal Krawczyk

      Michal Krawczyk
      Blog Post Author

      Hi,

      Feedback from Vegas (hope to see some more in Barcelona):

      How to test your PI/PRO interfaces

      Best Regards,

      Michal Krawczyk