I believe this post will seem like a really strange solution to most of the gurus here, but it was the only solution I found for this task.
Sometimes you have a set of programs (standard ones, or custom ones written many years ago) that work well and don't need to be redesigned to the new class-based approach. And your logic requires running these reports from update tasks.
Why do I have such a strange requirement? Let's say we have a report that regenerates something, with a COMMIT WORK inside it. And I have an update process that should finally include this regeneration as well. This call should be done only after the DB commit, and of course it shouldn't be made in case of errors. And yes, it takes a lot of time, which is why we call the update FM in V2 mode.
So initially we are here:
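To make the starting point concrete, here is a minimal sketch of how the dialog program registers the update FM. The FM name SET_STATUS_FM comes from my scenario; the Z prefix and the parameter are hypothetical, and the V2 (low-priority) mode is set in the FM attributes in SE37, not in the call itself:

```abap
* Hypothetical sketch of the starting point: the dialog program
* registers the update FM and triggers the update task via COMMIT WORK.
* SET_STATUS_FM is flagged as "Update with immediate start, not critical"
* (V2) in its function module attributes.
CALL FUNCTION 'Z_SET_STATUS_FM' IN UPDATE TASK
  EXPORTING
    iv_status = 'DONE'.   " parameter name is an assumption
COMMIT WORK.
```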
So if you try to submit the report directly from SET_STATUS_FM, which runs in the update task, you will definitely get a short dump. Some of my teammates were even sure there was no way to do this without getting the dump, but we finally handled it. Calling the report via a background job is also not possible here.
After spending some time on research, I figured out that it's possible to submit a report from a transactional RFC. So I prepared a test program to prove that. It looked like this:
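Roughly, the test setup could look like the sketch below. UPDATE_INDEX_RFC is the FM name from this post; the report name and the exact body are assumptions. The FM must have processing type "Remote-Enabled Module":

```abap
FUNCTION zupdate_index_rfc.
* RFC-enabled FM. SUBMIT works here because the tRFC unit is
* executed in a regular work process, outside the update task.
  SUBMIT zupdate_index_report AND RETURN.  " report name is an assumption
ENDFUNCTION.
```

And the test program that proves the point, since registered tRFC calls are executed only after COMMIT WORK:

```abap
REPORT ztest_trfc_submit.
* Register the FM as a tRFC unit; it runs asynchronously after the commit.
CALL FUNCTION 'ZUPDATE_INDEX_RFC' IN BACKGROUND TASK.
COMMIT WORK.
```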
The next step was supposed to be simply calling this UPDATE_INDEX_RFC from SET_STATUS_FM, but calling tRFC from update processes is prohibited, and you will get a dump here as well. But a solution was found, and its name is bgRFC.
Frankly speaking, I had never considered this technique provided by SAP before, because I had always thought it was just qRFC and tRFC redesigned in a class-based way. But that's not the case. There is at least one really powerful feature that bgRFC provides: you can call it from update processes.
I will also skip the description of the bgRFC functionality. All the related information can be found here:
So I configured bgRFC according to the provided help and created a new inbound destination. Then I changed the test program to code like this:
```abap
TRY.
    DATA(lo_dest) = cl_bgrfc_destination_inbound=>create( 'ZUPDATE_INDEX' ).
    DATA(lo_unit) = lo_dest->create_qrfc_unit( ).
    lo_unit->add_queue_name_inbound( EXPORTING queue_name = 'ZUPDATE_INDEX' ).
    CALL FUNCTION 'ZUPDATE_INDEX_BGRFC' IN BACKGROUND UNIT lo_unit.
  CATCH cx_bgrfc_invalid_destination.
ENDTRY.
```
And the sequence of calls became like this:
That worked well too, and the last step was to combine all the calls into one chain:
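Put together, the chain could be sketched like this. The FM names come from this post; the Z prefix on SET_STATUS_FM, the status-update placeholder, and the error handling are assumptions:

```abap
FUNCTION z_set_status_fm.
* Update FM (V2 mode). Inside the update task we only *register*
* the bgRFC unit; it is executed after the update has committed,
* so ZUPDATE_INDEX_BGRFC (which submits the report) already sees
* the updated database.
  " ... the actual status update (DB changes) goes here ...
  TRY.
      DATA(lo_dest) = cl_bgrfc_destination_inbound=>create( 'ZUPDATE_INDEX' ).
      DATA(lo_unit) = lo_dest->create_qrfc_unit( ).
      lo_unit->add_queue_name_inbound( EXPORTING queue_name = 'ZUPDATE_INDEX' ).
      CALL FUNCTION 'ZUPDATE_INDEX_BGRFC' IN BACKGROUND UNIT lo_unit.
    CATCH cx_bgrfc_invalid_destination.
      " inbound destination not configured; log or raise as needed
  ENDTRY.
ENDFUNCTION.
```

If the update task fails, the bgRFC unit is never registered in the database, so the report is not started at all.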
You are probably thinking by now that I'm crazy and see this as a piece of trash. But if so, I would be grateful to receive an alternative way to do this =).
Anyway, we achieved several goals at the same time:
- we reused the report as is, with no code refactoring inside;
- if the update process fails, the bgRFC unit doesn't start;
- the bgRFC unit starts only after the COMMIT statement;
- the bgRFC unit works with the already updated database.
There is at least one point I would also pay attention to:
- if the bgRFC unit crashes, the update is not rolled back.
But that last point was OK for us, so we didn't worry about it. It's similar to running the report as a scheduled job.
I hope this information will be helpful to some of you.