Ian Stubbings

Centralised Transport Naming Standards – Branch by Abstraction

This is the 3rd blog post in the series Centralised Transport Naming Standards.

See also blog posts:

Centralised Transport Naming Standards

Centralised Transport Naming Standards – SCP ABAP Environment Migration

Centralised Transport Naming Standards – Service Now Integration

While migrating the original code from the first blog post to the SCP ABAP Environment, I decided to refactor it for several reasons:

  1. I was already spending a lot of time in the code adding the new data types, so a fair amount of rework was going on regardless
  2. I had decided that the code was unsustainable in its current form and I needed a way to try out new features while maintaining a stable code line. This is mostly because the code lives on a central development system – where we host our Central ATC – so ‘development’ is effectively also ‘production’. (This type of approach is also mentioned in the Trunk Based Development section of the blog post linked below)
  3. As the migrated code was completely isolated from the original, it was an ideal opportunity to restructure it without risk

For item 2, I happened to mention this in an internal Slack channel and my chapter head steered me towards Martin Fowler’s Branch by Abstraction website. Jackpot! (Subsequently, I saw this approach suggested in the Trunk Based Development section of Christoph Pohl‘s blog post on CI/CD in ABAP – An Outside-In View).

This was exactly what I was after. I started to refactor my code and so far I have ended up with a UML diagram like this.

What this boils down to is two branches, the Main Branch and the Feature Branch. Which one is used is decided by a combination of the factory class and a config table. The Main Branch is the ‘live’ branch and is called by default in the RFC. The Feature Branch is called if the calling system ID matches any entry in a config table (ZCAU_TR_NM_FEAT). In this way, the branches can be switched instantly if an error is introduced in the new feature branch and the code has to be ‘rolled back’. A new feature can therefore be rolled out to several key systems without risk.
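As a rough sketch, the factory decision could look something like this. Only ZCAU_TR_NM_FEAT and ZCL_CAU_TRN_NM_CHKS_FEATURE appear in the real code; the factory class name, the main-branch class name, the GO_INJECTED attribute and the method signature are my assumptions, and constructor parameters are omitted for brevity:

```abap
CLASS zcl_cau_trn_nm_chks_factory DEFINITION.
  PUBLIC SECTION.
    METHODS create
      IMPORTING iv_sysid         TYPE sy-sysid OPTIONAL
      RETURNING VALUE(ro_checks) TYPE REF TO zcl_cau_trn_nm_chks.
  PRIVATE SECTION.
    " set by the test injector (see the unit test section below)
    CLASS-DATA go_injected TYPE REF TO zcl_cau_trn_nm_chks.
ENDCLASS.

CLASS zcl_cau_trn_nm_chks_factory IMPLEMENTATION.
  METHOD create.
    " unit tests bypass the config lookup entirely
    IF go_injected IS BOUND.
      ro_checks = go_injected.
      RETURN.
    ENDIF.

    " feature branch only for systems registered in the config table
    SELECT SINGLE @abap_true
      FROM zcau_tr_nm_feat
      WHERE sysid = @iv_sysid
      INTO @DATA(lv_feature).

    ro_checks = COND #( WHEN lv_feature = abap_true
                        THEN NEW zcl_cau_trn_nm_chks_feature( )
                        ELSE NEW zcl_cau_trn_nm_chks_main( ) ).
  ENDMETHOD.
ENDCLASS.
```

Rolling back a broken feature is then just a matter of deleting the system's row from ZCAU_TR_NM_FEAT – no transport required.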

You may ask why I used an abstract class rather than an interface. A good question for sure. The answer is that an interface has no method implementations, but I wanted to house the majority of the methods in the abstract class so that they are common to both branches. The individual branches then only need to redefine the new feature methods while those are under development/test. The feature branch has the new code, whereas the main branch simply returns abap_true (if the method is implemented at all).
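The shape of that hierarchy could look like the sketch below (class and method names other than GET_CONFIG, CHECK_TITLE and ZCL_CAU_TRN_NM_CHKS_FEATURE are illustrative assumptions on my part):

```abap
" Abstract base class: shared implementations live here
CLASS zcl_cau_trn_nm_chks DEFINITION ABSTRACT.
  PUBLIC SECTION.
    METHODS get_config.
    METHODS check_title RETURNING VALUE(rv_ok) TYPE abap_bool.
ENDCLASS.

CLASS zcl_cau_trn_nm_chks IMPLEMENTATION.
  METHOD get_config.
    " common logic, inherited unchanged by both branches
  ENDMETHOD.
  METHOD check_title.
    " default behaviour while the feature is not live: just pass
    rv_ok = abap_true.
  ENDMETHOD.
ENDCLASS.

" Main branch: inherits everything as-is
CLASS zcl_cau_trn_nm_chks_main DEFINITION
  INHERITING FROM zcl_cau_trn_nm_chks.
ENDCLASS.

" Feature branch: redefines only the methods under development
CLASS zcl_cau_trn_nm_chks_feature DEFINITION
  INHERITING FROM zcl_cau_trn_nm_chks.
  PUBLIC SECTION.
    METHODS check_title REDEFINITION.
ENDCLASS.
```

An interface would have forced every implementation into both classes; with the abstract base, the main branch needs no code of its own at all.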

Below is largely the same information, but it also shows how the connection is made to the satellite systems. The fact that it is developed on our Central ATC system is just for convenience, by the way; there is no dependency on it, just a reuse of the RFC connections.

  1. Initial call to the RFC enabled FM Z_CAU_TRANSPORT_CHECKS from the satellite system via an implementation of the CTS_REQUEST_CHECK BAdI
  2. Factory class instantiated and create method called
  3. Main or feature branch object is determined depending on the config table ZCAU_TR_NM_FEAT
  4. Respective object passed back to the FM and methods called accordingly
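The flow above could be sketched as follows inside the function module. The FM name, factory class, GET_CONFIG and CHECK_TITLE come from the article; the parameter names and result flag are assumptions:

```abap
FUNCTION z_cau_transport_checks.
* Called via RFC from the satellite system's CTS_REQUEST_CHECK BAdI
* implementation. Signature is a sketch only.

  " 2. instantiate the factory
  DATA(lo_factory) = NEW zcl_cau_trn_nm_chks_factory( ).

  " 3. main or feature branch chosen via config table ZCAU_TR_NM_FEAT
  DATA(lo_checks) = lo_factory->create( iv_sysid = iv_sysid ).

  " 4. call the check methods on whichever branch came back
  lo_checks->get_config( ).
  ev_title_ok = lo_checks->check_title( ).

ENDFUNCTION.
```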

You will notice that I have also ‘offloaded’ the database activity to a separate class. This is not done via the ‘out of favour’ persistent classes, but it still provides a layer of abstraction. I may also extract the logging mechanism to a separate class in the future.
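For illustration, the database class might be as simple as this (the class name, method name and table type are my assumptions; only the table ZCAU_TR_NM_FEAT is from the article):

```abap
CLASS zcl_cau_trn_nm_db DEFINITION.
  PUBLIC SECTION.
    TYPES tt_feat TYPE STANDARD TABLE OF zcau_tr_nm_feat WITH EMPTY KEY.
    " all SELECTs live here, so check classes stay database-free
    METHODS read_feature_systems
      RETURNING VALUE(rt_systems) TYPE tt_feat.
ENDCLASS.

CLASS zcl_cau_trn_nm_db IMPLEMENTATION.
  METHOD read_feature_systems.
    SELECT * FROM zcau_tr_nm_feat INTO TABLE @rt_systems.
  ENDMETHOD.
ENDCLASS.
```

Keeping the SQL behind one class also makes it straightforward to replace with a test double in ABAP Unit later.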

In order to test the classes effectively, I have employed ABAP Unit testing, something I have been keen on for many years but seldom get the chance to practise. Below I make use of an injector class to force the local test class to use the correct class; otherwise the factory class would return the object reference depending on the entry in the config table. That would of course defeat the point of unit testing the feature branch or main branch completely, as it would test a mix of the two.

      zcl_cau_transport_injector=>inject_transport_checks(
        NEW zcl_cau_trn_nm_chks_feature(
          request = 'XXXK9B0CIJ'
          type    = 'K'
          sysid   = 'XXX'
          text    = 'XXX:R7:E000374:B2C:Defect 31313 Role to loc. type'
          owner   = 'XXXXXX' ) ).

      lo_cut_factory = NEW #( ).
      lo_cut = lo_cut_factory->create( ).

      lo_cut->get_config( ).

      DATA(lv_ok) = lo_cut->check_title( ).

      " cl_abap_unit_assert is the released assertion class in the
      " SCP ABAP Environment (cl_aunit_assert is its obsolete predecessor)
      cl_abap_unit_assert=>assert_equals( act = lv_ok
                                          exp = abap_false ).
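The injector itself can be a very thin class. This is only a sketch: the FOR TESTING addition, the FRIENDS relationship to the factory and the GO_INJECTED attribute name are my assumptions, while the class and method names come from the test above:

```abap
" Test seam: only usable in a test context thanks to FOR TESTING.
" Assumes the factory declares this class as a friend so the private
" GO_INJECTED attribute is reachable.
CLASS zcl_cau_transport_injector DEFINITION FOR TESTING.
  PUBLIC SECTION.
    CLASS-METHODS inject_transport_checks
      IMPORTING io_checks TYPE REF TO zcl_cau_trn_nm_chks.
ENDCLASS.

CLASS zcl_cau_transport_injector IMPLEMENTATION.
  METHOD inject_transport_checks.
    " the factory returns this instance when it is bound, bypassing
    " the ZCAU_TR_NM_FEAT lookup during unit tests
    zcl_cau_trn_nm_chks_factory=>go_injected = io_checks.
  ENDMETHOD.
ENDCLASS.
```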


Summary

So currently, this feels like a neat solution given the constraints of a single dev/prod system. I do however wonder if the code would be better if it existed in AWS as part of a serverless implementation using API Gateway, Lambda and DynamoDB. It would certainly be a nice exercise to try anyway!

Next up, though, is the inclusion of the Service Now validation logic to check the reference numbers in the 3rd segment. Trials and tribulations of OAuth2…

