Centralised Transport Naming Standards
The first in a series of blog posts about Centralised Transport Naming Standards.
See also blog posts:
|Centralised Transport Naming Standards – SCP ABAP Environment Migration|
|Centralised Transport Naming Standards – Service Now Integration|
The company I work for has a sizeable and complex SAP estate with dozens of production SAP systems and therefore an equally complex development environment. Although development standards have been in place for a number of years, it is always a challenge to ‘encourage’ developers to adhere to them.
In the last couple of years, we have implemented a Central ATC system, which has made a huge difference in our ability to effectively control the quality, performance and security of the code being delivered to QA systems (thanks to Olga Dolinskaja and her excellent blog post series, plus Bärbel Winkler for sharing her experience in her Central ATC implementation blog post series. Their blogs and continued help are very much appreciated).
As well as development standards, the Transport Naming Standards are also important – especially for one particular business unit whose developments move to multiple QA systems, where knowing what goes where is paramount for safe and reliable operations. Although reports had been built to provide statistics and details of transport adherence, this was all reactive: transports were routinely released without ticket details and/or release schedule details. For audit purposes, ticket details are mandatory, and for the Release Management team, the schedule details help to identify the scope of each release. The burden of correcting transport names generally fell to a few build leads, and with the new DevOps structure and the increased frequency of releases, this was starting to consume up to 25% of their day. Not exactly an efficient use of time.
The main benefit, therefore, was to move this burden onto individual developers, so that the ‘pain’ was felt at the source. This relieved the build leads of this bottleneck and time drain and, as it turned out, enabled them to concentrate on building more automation as a result of this implementation. The benefits can be summarised as:
- Significant time saved to audit transports
- Easier mechanism to validate all transports for the weekly release are included
- Further development based on the change reference number is now possible, to generate the scope list automatically
Initially I considered whether it was appropriate to build a custom ATC check to house all the development in the Central ATC, but this was not the recommended approach (see this blog post on the subject), so I set about building my own solution.
The solution came in two parts:
Provide links and information for all the development standards and transport naming standards from within SE80. See another excellent blog post by Bärbel Winkler on this subject. This ensured that all the standards were within easy reach of the developers, providing them with all the information required.
Transports were to be blocked at both creation and release via the CTS_REQUEST_CHECK BAdI to enforce the correct format, which is:
- <Programme/Region> = EPB, MOW, MDG, FSCM etc.
- <Project/Release/Sprint> = R5, 6, QR1, OOR, WR, QR01, ELS etc.
- <Change_Ref> = Service Now, Azure DevOps, SolMan or HPALM Ref Number
- <Description> = Self-explanatory
- <Delta No> = For subsequent transports for the same change ref 1, 2, 3, 4 etc.
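To make the format concrete, a compliant title might read ‘EPB R5 CHG1234567 Fix pricing routine 2’ (values invented for illustration). As a rough sketch, a structure like this can be validated with a single ABAP regular expression; the pattern below is an assumption on my part, not the actual pattern used in the solution, which holds its patterns in configuration:

```abap
" Sketch only: example title and pattern are invented for illustration.
DATA(lv_title)   = 'EPB R5 CHG1234567 Fix pricing routine 2'.
" <Prefix> <Project/Release/Sprint> <Change_Ref> <Description> [<Delta No>]
DATA(lv_pattern) = '^[A-Z]{2,4} \S+ \S+ .+?( \d+)?$'.
IF matches( val = lv_title regex = lv_pattern ).
  " Title conforms to the naming standard
ELSE.
  " Reject the transport and show the expected structure to the user
ENDIF.
```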
A single remote-enabled function module in the Central ATC system was required. This was just a wrapper for a local class, split into three sections. See below:
DATA: lv_ok               TYPE abap_bool,
      lo_transport_checks TYPE REF TO lcl_transport_checks.

" Use the local class
CREATE OBJECT lo_transport_checks
  EXPORTING
    request = request
    type    = type
    sysid   = sysid
    text    = text
    owner   = owner.

IF lo_transport_checks IS BOUND.
* Read the config, mapped from the system id
  lv_ok = lo_transport_checks->get_config( ).
  IF lv_ok = abap_false.
    log = lo_transport_checks->get_log( ).
    lo_transport_checks->update_statistics( 001 ).
    RETURN.
  ENDIF.

* Check the prefix against the config
  lv_ok = lo_transport_checks->check_prefix( ).
  IF lv_ok = abap_false.
    lo_transport_checks->log_valid_prefixes( ).
    log = lo_transport_checks->get_log( ).
    lo_transport_checks->update_statistics( 002 ).
    RETURN.
  ENDIF.

* Check the transport title using regular expressions
  lv_ok = lo_transport_checks->check_title( ).
  IF lv_ok = abap_false.
    lo_transport_checks->log_follow_structure( ).
    log = lo_transport_checks->get_log( ).
    lo_transport_checks->update_statistics( 003 ).
    RETURN.
  ENDIF.
ELSE.
  log = lo_transport_checks->log_binding_error( ).
ENDIF.

* Also log the success
lo_transport_checks->update_statistics( 000 ).
The first section ensures the correct configuration is in place. Configuration tables are necessary, firstly, to ensure the linkage between the production systems and their respective development systems is in place.
* Read the config, mapped from the system id
lv_ok = lo_transport_checks->get_config( ).
IF lv_ok = abap_false.
  log = lo_transport_checks->get_log( ).
  lo_transport_checks->update_statistics( 001 ).
  RETURN.
ENDIF.
Secondly, each production system can then have its own set of prefixes aligned to either a single or multiple development system scenario (in the case of dual track or more landscapes). If a valid prefix is not entered, the full set of valid prefixes is then sent back to the user, advising them of what they must do to proceed.
* Check the prefix against the config
lv_ok = lo_transport_checks->check_prefix( ).
IF lv_ok = abap_false.
  lo_transport_checks->log_valid_prefixes( ).
  log = lo_transport_checks->get_log( ).
  lo_transport_checks->update_statistics( 002 ).
  RETURN.
ENDIF.
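As a rough sketch, the configuration behind these two checks could be modelled as two simple tables. The names and fields below are invented for illustration; the actual tables in the solution will differ:

```abap
" Hypothetical configuration tables - names/fields are illustrative only.
"
" ZTRN_SYSMAP: links each development system to its production system.
"   DEVSYSID  CHAR(8)   key  - development system id, e.g. 'DEV'
"   PRDSYSID  CHAR(8)        - mapped production system id, e.g. 'PRD'
"
" ZTRN_PREFIX: valid transport prefixes per production system.
"   PRDSYSID  CHAR(8)   key
"   PREFIX    CHAR(10)  key  - e.g. 'EPB', 'MOW', 'MDG'
```

Keeping the prefixes keyed by production system is what allows a dual-track landscape to share one production system while each development system still resolves to the same prefix set.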
Lastly, the rest of the title is then checked using regular expressions for conformity to the correct structure.
* Check the transport title using regular expressions
lv_ok = lo_transport_checks->check_title( ).
IF lv_ok = abap_false.
  lo_transport_checks->log_follow_structure( ).
  log = lo_transport_checks->get_log( ).
  lo_transport_checks->update_statistics( 003 ).
  RETURN.
ENDIF.
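Inside check_title( ), the kind of logic involved might look like the sketch below. The attribute and pattern are assumptions for illustration; the real class derives its patterns from configuration:

```abap
METHOD check_title.
  " Illustrative only: validate the part of the title after the prefix,
  " i.e. <Change_Ref> <Description> [<Delta No>]. mv_remaining_title is
  " a hypothetical attribute holding the title minus the prefix.
  rv_ok = xsdbool( matches(
            val   = mv_remaining_title
            regex = '^\S+ .+?( \d+)?$' ) ).
ENDMETHOD.
```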
At each section, any errors are logged and sent back to the user in the satellite system via the local POPUP_WITH_TABLE_DISPLAY function module, and a statistics table is updated for later use.
On the satellite system side, the CTS_REQUEST_CHECK BAdI has a simple stub of code as below, again with a local class to house the call to the Central ATC system.
METHOD if_ex_cts_request_check~check_before_creation.
  DATA: lv_result       TYPE sysubrc,
        lt_log          TYPE tchar255,
        lv_startpos_row TYPE sytabix,
        lv_startpos_col TYPE sytabix,
        lv_endpos_row   TYPE sytabix,
        lv_endpos_col   TYPE sytabix,
        lv_width        TYPE sytabix,
        lv_line         TYPE sytabix.

  FIELD-SYMBOLS: <lv_log> TYPE char255.

  me->remote_naming_check(
    EXPORTING
      type  = type
      owner = sy-uname
    IMPORTING
      log   = lt_log
    CHANGING
      text  = text ).

* Initial top left corner of dialog box
  lv_startpos_row = 5.
  lv_startpos_col = 28.

  IF lt_log IS NOT INITIAL.
* Set the necessary bottom right corner
    lv_endpos_row = lines( lt_log ) + 3.
    LOOP AT lt_log ASSIGNING <lv_log>.
      lv_width = strlen( <lv_log> ).
      IF lv_width > lv_endpos_col.
        lv_endpos_col = lv_width.
      ENDIF.
    ENDLOOP.
    lv_endpos_col = lv_endpos_col + lv_startpos_col.

    CALL FUNCTION 'POPUP_WITH_TABLE_DISPLAY'
      EXPORTING
        endpos_col   = lv_endpos_col
        endpos_row   = lv_endpos_row
        startpos_col = lv_startpos_col
        startpos_row = lv_startpos_row
        titletext    = 'Error Information'
      TABLES
        valuetab     = lt_log
      EXCEPTIONS
        break_off    = 1
        OTHERS       = 2.

    RAISE cancel.
  ENDIF.
ENDMETHOD.
It should also be noted that since the main logic is housed centrally and within a development system, it is always ‘live’ and therefore any changes made must be done with the greatest of care. For that reason, ABAP Unit has been employed to ensure that a test harness is in place to provide assurance that when adding new functionality, the old functionality is not broken.
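A minimal sketch of what such a test harness might look like is below. The class name, test method names and constructor signature are assumptions based on the wrapper code above, not the actual test class:

```abap
" Illustrative ABAP Unit sketch - names and signatures are assumptions.
CLASS ltc_transport_checks DEFINITION FOR TESTING
  DURATION SHORT RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS valid_title_passes FOR TESTING.
    METHODS missing_ref_fails  FOR TESTING.
ENDCLASS.

CLASS ltc_transport_checks IMPLEMENTATION.
  METHOD valid_title_passes.
    DATA lo_cut TYPE REF TO lcl_transport_checks.
    CREATE OBJECT lo_cut
      EXPORTING
        request = 'DEVK900123'
        type    = 'K'
        sysid   = 'DEV'
        text    = 'EPB R5 CHG1234567 Fix pricing routine 2'
        owner   = sy-uname.
    cl_abap_unit_assert=>assert_true( lo_cut->check_title( ) ).
  ENDMETHOD.

  METHOD missing_ref_fails.
    DATA lo_cut TYPE REF TO lcl_transport_checks.
    CREATE OBJECT lo_cut
      EXPORTING
        request = 'DEVK900124'
        type    = 'K'
        sysid   = 'DEV'
        text    = 'EPB R5 Fix with no change reference'
        owner   = sy-uname.
    cl_abap_unit_assert=>assert_false( lo_cut->check_title( ) ).
  ENDMETHOD.
ENDCLASS.
```

Because the checks are pure string and configuration logic, they test well in isolation without any transport actually being created.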
After being live for several weeks, further improvements were identified and have been implemented:
- Delta numbers were made mandatory, and hence a check for the colon and number was required
- Reference number type check (allowed Service Now or ADO document types)
- System specific additional requirements
Regarding point 3 above, catering for multiple production system requirements is not easy. Although these are central standards, some systems require further checks for their own unique circumstances. To cater for these while keeping the core standards clean, and without introducing a series of system-specific IF or CASE statements into the central code, the multiple-use capability of the CTS_REQUEST_CHECK BAdI was taken advantage of, ensuring the correct calling sequence (central standards followed by local standards) via the sort order option of the BAdI.
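In practice this means a second, system-specific implementation of the same BAdI definition kept alongside the central stub, with a sort order that guarantees it runs after the central check. A skeleton might look like the following; the method local_rules_ok is invented for illustration:

```abap
" Illustrative skeleton of a system-specific CTS_REQUEST_CHECK
" implementation. local_rules_ok( ) is a hypothetical local method.
METHOD if_ex_cts_request_check~check_before_release.
  " The central standards have already been enforced by the first
  " (lower sort order) implementation; only local rules go here.
  IF me->local_rules_ok( text ) = abap_false.
    RAISE cancel.
  ENDIF.
ENDMETHOD.
```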
To illustrate this, the logical architecture is shown below.
With the sequence of events as follows:
So far we have five systems (two landscapes) attached, with another four systems imminent. As more systems come on board, feedback is positive and improvements are suggested.
Any suggestions that are believed to be ‘good for all’ will be built into the core logic in the Central ATC system, which is why we were keen to centralise this in the first place. System-specific developments that ‘don’t make the grade’ will be developed and supported by the respective system teams.
Current improvements on the backlog are:
- Provide a better UX than the POPUP_WITH_TABLE_DISPLAY FM. It is simple and functional, but a better-formatted HTML-based UI would be preferable
- API calls to both Service Now and Azure DevOps to validate the change reference number added to the title
- Configuration tables to be accessed via transactional CDS views/Fiori Elements rather than SM30 (nice to have but good excuse to develop experience in this area).
Also, I will be exploring whether it makes sense to move the functionality from the Central ATC system to the SCP ABAP Environment (aka Steampunk). I’ll provide a blog post on the experience when I get round to it.
Naturally I’d be very interested to hear if anyone else has implemented anything similar and has suggestions for improvement. Equally, I’d be interested if you just find the blog post useful 🙂