Transport strategies in SAP IdM 8.0
Depending on your IdM landscape, you might have several instances of SAP IdM in your company:
- DEV: Development Server
- QM: Test Server
- PROD: Productive Server
Transporting artifacts from one server to another was way easier back in SAP IdM 7.2: you selected the changes in the source system, exported them into .mcc files, and then imported them on the target machine.
In SAP IdM 8.0 you can't export single changes, only whole packages. This means you will face a few difficulties you did not face before. Here I want to show some strategies for handling the new challenges that come with the package concept of SAP IdM 8.0.
Let's assume a simple scenario: a two-step landscape consisting of a DEV and a PROD system. We develop new processes and functionality for our IdM system on the DEV server, test them out, and once we think they are production-ready, we want to transport them to PROD.
There are many different strategies for transporting changes between systems. I will show you two of the ones I use most and list some pros and cons.
Bulk package transport
In this strategy you only ever transport complete packages from DEV to PROD. When you think your work is finished, you export your package and import it into PROD, overwriting all the existing logic with the new version.
Pros:
- Always synchronized systems
- Easy transportation
- No need for extensive change tracking
- Small chance of missing functionality
- Easy source code versioning
- Realistic scenario in a two-step landscape
Cons:
- Needs a detailed package structure (you don't want one big package, but many small ones)
- A cyclic development/deployment phase is necessary
- You need to finish all development on a package before transporting
- No way to transport only part of your developed code
- Too complicated for "quick fixes"
- No way to check package differences
- Overwrites everything, which can lead to unwanted changes
Transport packages
In contrast to the bulk package transport, you can also mimic transport units by creating a dedicated transport package. After your development, you move all the affected jobs/tasks/scripts into this package, transport it to PROD, move them back into their proper packages, and then delete the transport package in both systems.
Pros:
- Easy way to transport small changes
  - small bugfixes
- Better control of the transport process
  - you can transport just part of your developed code
  - no automatic/uncontrolled overwrites in PROD
- Good solution when you have big packages (e.g. directly after the upgrade from 7.2 to 8.0)
Cons:
- Your packages in DEV and PROD will get out of sync over time
  - the bigger the differences between the systems, the less reliable your tests become
- Needs very detailed change tracking
  - you do not want to miss dependencies
- Needs extensive testing
- A three-step landscape is necessary here!
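The transport-package shuffle is easy to get wrong by hand, so it can help to think of it as plain set operations. Below is a purely conceptual Python sketch of the procedure — the package and task names are made up, and in the real system these moves are of course done manually in the Eclipse Developer Studio, not through any API:

```python
# Conceptual model of the "transport package" strategy.
# Packages are modelled as plain sets of task names.

def move(tasks, source, target):
    """Move the given tasks from one package to another."""
    for t in tasks:
        source.remove(t)   # raises KeyError on a typo -> fails loudly
        target.add(t)

# DEV: a feature package plus an (initially empty) transport package
feature_pkg = {"Approve Request", "Send Mail", "Write Audit Log"}
transport_pkg = set()

# 1. Move only the finished changes into the transport package
changed = {"Approve Request", "Send Mail"}
move(changed, feature_pkg, transport_pkg)

# 2. Export transport_pkg on DEV, import it on PROD (manual Eclipse step)

# 3. On both systems: move the tasks back into their real package ...
move(changed, transport_pkg, feature_pkg)

# 4. ... and delete the now-empty transport package
assert not transport_pkg
```

The point of the model: steps 1 and 3 must be exact mirror images on both systems, which is precisely where the "very detailed change tracking" from the cons list comes in.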
It is hard to name the single _best_ strategy, because depending on your situation and conditions, one strategy may fit your system better or worse than another.
In general you should prefer the bulk package transport; it's the only way to keep your systems clean and synchronized. It might not sound like a big problem, but unsynced systems will have a big impact on your work: the more differences there are between two systems, the less reliable your tests and further development will get.
In both strategies, good documentation and change tracking are essential and will decide how well transports work out for your company.
How do you transport? Do you have a different approach? Let’s discuss!
As no one has commented yet, I'll just throw in my own experience (spoiler alert: a horror story for all IdM developers incoming).
Long story short
I won't ever use the package transport method, except maybe for completely new packages (which will probably never happen).
After going live with 8.0 in May, I more or less worked in prod only, getting rid of smaller stuff like uProvision calls from wrong tasks and such. Data imports are done in prod only anyway, and they make up a big part of my time these days.
Then, in mid-June, I wanted to get back to my two-system landscape. So I tried exporting all the packages. And I failed miserably. I cannot recall exactly how it happened, but when I shut down the hanging Eclipse, 3386 tasks got deleted. This included the main task package with the versions of two times six request workflows, hundreds of mail and database-writing tasks, and so much more. No, not separating them during the migration wasn't a good idea... Maybe I should have waited longer, not just half an hour?!
Yes, they got deleted. I never experienced THAT with the MMC — some other stuff, yes, but task deletion due to a messed-up export, never. I just sat there like, "No, no, that didn't happen...". I took everything down, UI and dispatchers, then opened a very-high-priority ticket with SAP and had my DB admins back up the broken DB for later analysis. In parallel I extracted all changed data, mostly MX_PERSON and KNF_REQUEST as well as the table with our non-REST-API forms (hooray for them, saved a big load of work with "just" copying and pasting them back in!).
As the SAP response didn't come in and the first customers asked where their request mails were, I decided to restore the DB backup from the day before, 7 PM. Then I threw in the changes and let the requests re-run. That took about half an hour, and the system was something like 99% restored.
As a consequence I didn't dare risk this again, and for over a month I didn't check in the task package. Nor did I try to export anything. Finally I decided to check it in, and for about a minute I feared it would happen again. Yet everything works now as "designed".
I actually plan to rebuild dev from prod, by copying the complete DB to another server. The big mess we created in 2009 by copying the whole DB from dev to prod is now my biggest trump card.
Transporting will be like in 7.1 or 7.2: each single thing on its own. Or rather, it will be worse. In both of those versions it was at least possible to copy jobs between the systems. I don't dare do that now, if it even works. If I have to restructure workflows, well, I guess I'll simply stop the job that runs our request table, let my colleagues process the rest, and then restructure. Worked like a charm in 7.2, so why should it fail now.
Transport-Fail me once, shame on SAP. Transport-Fail me twice, shame on me. 🙂
You were not kidding with your spoiler alert! Yikes!
We're in the middle of upgrading to IdM 8, but I have a very detailed package structure (Aydin can attest to that). After your story I am a bit wary about the transport scenario, but we will start doing this from the migration system to the dev system, so if it implodes, at least prod is safe for the moment.
Just let me put "backup the migration DB before transport!!!" on our checklist... you never know...
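That checklist item is worth automating. Assuming a SQL Server based IdM installation (the database name `MXMC_db` and the backup path below are placeholders for your own), the pre-transport backup is a single standard T-SQL statement; this sketch only assembles and prints it rather than executing anything:

```python
from datetime import datetime

def backup_statement(db_name, backup_dir):
    """Build a T-SQL BACKUP DATABASE statement for a pre-transport backup.

    db_name and backup_dir are placeholders -- use your real IdM database
    name and a path the SQL Server service account can write to.
    """
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return (f"BACKUP DATABASE [{db_name}] "
            f"TO DISK = N'{backup_dir}/{db_name}_{stamp}.bak' "
            f"WITH CHECKSUM;")

# Print the statement to paste into SSMS or sqlcmd before transporting.
print(backup_statement("MXMC_db", "/var/backups/idm"))
```

On Oracle or DB2 the equivalent would of course be RMAN or the db2 backup command instead.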
My plan (once this whole project is done) is to develop in the dev system and then transport the involved packages to prod. Hopefully I will not have a similar horror story to share later on. ^^
I always choose the first option and keep dev and production in sync. As you said, keeping many smaller packages rather than one large package is the key.
Using the second option is not ideal from my point of view. It should be fine if the system is maintained by only one developer, but if someone else takes over, they will find it hard to spot the differences between QA and prod, and will be scared to make any changes to production directly.
However, the current transport mechanism does have some issues. When checking out a package, it should provide an input box to enter the reason for the checkout; otherwise it is easily forgotten. You are allowed to enter comments only when you decide to check in, by which time you may have forgotten the reason.
Another issue is that you are not able to safely revert to a previous version; you need to manually store every version of the package yourself.
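A low-tech workaround for the missing revert is to archive every package export with a timestamp before checking in. A minimal sketch — the file and folder names are examples, and you should use whatever filename extension your export actually produces:

```python
import shutil
from datetime import datetime
from pathlib import Path

def archive_export(export_file, archive_dir):
    """Copy an exported package file into an archive folder,
    tagged with a timestamp, so every version can be restored later."""
    src = Path(export_file)
    dst_dir = Path(archive_dir)
    dst_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    dst = dst_dir / f"{src.stem}_{stamp}{src.suffix}"
    shutil.copy2(src, dst)  # copy2 preserves the file's timestamps
    return dst

# Example: archive_export("exports/MyPackage.mcp", "exports/archive")
```

Putting the archive folder under version control would additionally give you diffs between package versions for free.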
The last issue I can see is that I am not able to change the schedule rule of repository jobs without a package checkout. I have to check out the package, change the schedule, then revert the package. This looks stupid to me.
It is very simple to update the job schedule rules directly in the tables, and it can be done in minutes. So far I haven't faced any issues updating the tables, but it's at your own risk.