
Retiring applications that came from an acquisition is a pretty simple process: lock down, archive if necessary, wipe if necessary, and decommission. It becomes a daunting task when you start to consider tens or hundreds of applications and hundreds or thousands of servers, compliance and regulatory requirements, and the need to retain historical information. But what daunting task cannot be executed with a good standard process? See M&A Application Retirement – Part 1 – The Case for Retirement for more on the case for retirement, and M&A Application Retirement – Part 2 – Inventory Time on the importance of a good inventory.

Early on in our retirement project the team identified the need for a standard approach. As I mentioned in M&A Application Retirement – Part 2 – Inventory Time, in retrospect we started a little later than we should have. The first step was to bring everyone up to speed on the steps in that last phase of an application’s lifecycle: its retirement.


The all-important lockdown

The lockdown phase of retirement switches the application from read/write mode to read-only. Depending on the application technology, this could be as simple as a configuration setting or as complex as running a number of database scripts. Our applications tended to fall into two categories when it came to the need for lockdown: financially relevant and everything else. For the first category timing was critical: we had to be able to demonstrate to auditors that we ceased to do business on the old applications at the same time we cut over to the new systems.
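The actual lockdown mechanism depended on each application's technology and is not spelled out in this post, but the idea can be sketched in miniature. Here SQLite's `query_only` pragma stands in for whatever read-only control a real platform offers (REVOKE statements, read-only transaction defaults, and so on); the table and data are invented for illustration.

```python
import sqlite3

# Illustrative lockdown sketch: flip a live connection to read-only and
# verify that reads continue while writes are rejected.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, amount REAL)")
conn.execute("INSERT INTO invoices VALUES (1, 100.0)")
conn.commit()

# The "switch": many databases offer an analogous setting or privilege change.
conn.execute("PRAGMA query_only = ON")

rows = conn.execute("SELECT amount FROM invoices").fetchall()  # reads still work

try:
    conn.execute("INSERT INTO invoices VALUES (2, 50.0)")
    locked = False
except sqlite3.OperationalError:
    locked = True  # writes are now rejected: the lockdown holds
```

Capturing the failure of a test write like this, with a timestamp, is one simple way to produce the read-only evidence auditors ask for.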

Detailed plans were created for these critical applications, along with lockdown tests run in advance of the cutover day. For core financial applications there was a two-phase lockdown. The first phase switched most users to read-only on cutover day but allowed a select few individuals in finance to make controlled changes to the system in order to close off the last period’s business. The second phase finished the lockdown for all users.

For critical applications, precise dates and times of the lockdown along with evidence of the switch to read-only were captured. This information would be used later as input to future retirement phases as part of a complete auditable trail for the retirement process. Early on we underestimated what it would take to have a complete and unambiguous trail from the live application to the archive. Better end-to-end planning and testing would have helped in this area.


The benefit of a comprehensive lockdown should be evaluated on the basis of cost and risk to the organization. The vast majority of applications required almost no effort; in a few cases intensive effort was necessary. We had some applications in our inventory that were a legacy of past acquisitions for which no documented, rigorous lockdown had ever been performed. We determined the need to correct the past on a case-by-case basis. Many of these applications had been sitting idle for several years. We experimented with locking down one of these legacy applications, but the cost of the forensic review easily outweighed the risk-reduction value to the organization.

 

Archives are a whole new project

I noted in part 1 that the IT organization is the customer for the retirement project. However, when an application’s retirement will require an archive, you will have a business customer just as you did for the original application. Effectively the archive is replacing the application the business users once had. This time the application probably has a fixed termination date as it is really the last phase of the original application’s lifecycle.

Generally, every application got a tape archive. This was a relatively straightforward task for us, as operational teams were already in place making tapes on a regular basis. Each tape was given a destruction date according to the data retention requirements. We used tape technology with an anticipated life of 30 years, but, with one exception, the destruction dates were within 10 years.
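A destruction date follows mechanically from the archive date and the retention requirement. A trivial sketch of that calculation (the helper name and the leap-day handling are my own; real retention rules vary by jurisdiction and data class):

```python
from datetime import date

def destruction_date(archived_on: date, retention_years: int) -> date:
    """Return the earliest date a tape may be destroyed.

    Illustrative only. Feb 29 archive dates clamp to Feb 28 when the
    destruction year is not a leap year.
    """
    try:
        return archived_on.replace(year=archived_on.year + retention_years)
    except ValueError:  # Feb 29 in a non-leap destruction year
        return archived_on.replace(year=archived_on.year + retention_years, day=28)

# A tape cut on 2010-06-30 under a 10-year retention rule:
print(destruction_date(date(2010, 6, 30), 10))  # 2020-06-30
```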

If there was a certainty or near certainty that the data would be needed, additional archives would be created. These archives could be as simple as PDF documents or Excel worksheets or as complicated as a fully online archive with reporting facilities.

 

Online archives

It is the last category of archive, online, that gave us the greatest challenge. We used a tool to create the archives from the database. Once it was properly set up, it was an easy task to make an archive from any database. The archive was very compact and in a format that we could expect to be able to interpret 30 years from now. This was important, as certain HR data had a retention period of 30 years. Since one tool can support many archives, we expect the cost to maintain these archives to be fairly low relative to the cost of continuing to run the legacy applications.
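The post does not name the archiving tool, but the core requirement, a compact, tool-independent format that stays interpretable for decades, can be sketched. Here each table is dumped to plain CSV with a header row; sqlite3 stands in for the real source database, and the table and data are invented:

```python
import csv
import io
import sqlite3

def archive_tables(conn):
    """Dump every user table to plain CSV text, keyed by table name.

    Plain text is a deliberately boring format: it needs no vendor tool
    to interpret decades later. (Sketch only; the original project used
    a commercial archiving tool.)
    """
    archives = {}
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        cur = conn.execute(f"SELECT * FROM {table}")
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow([c[0] for c in cur.description])  # header row
        writer.writerows(cur.fetchall())                  # data rows
        archives[table] = buf.getvalue()
    return archives

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hr_records (emp_id INTEGER, hired TEXT)")
conn.execute("INSERT INTO hr_records VALUES (7, '1999-03-01')")
out = archive_tables(conn)
```

A real archive would also need a manifest describing each table's meaning and retention period, since column names alone rarely survive 30 years of organizational memory.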

The biggest challenge with the online archive is data access. The original applications presented data to users via screens, reports, and business intelligence tools. If an auditor asks to see an invoice from the archived finance data, it must have the same information, and likely the same format, as the original invoice. With the old application shut down, the only way to get this information is to replicate this functionality in the archive system.

The project team must weigh the benefits of keeping the old application alive for an extended period of time versus building a whole new set of reports. Keeping the old application alive has support ramifications as knowledge leaves the company; there may be ongoing costs to the application vendor to keep the application running; it may be technically difficult for users to access the application in the new company network environment; and the users that will need the data over time are unlikely to be familiar with the original application. There is a trade-off between the number of reports that need to be built and the length of time the old application is kept alive.

In our project, the largest online archive was created for a finance application. A hybrid approach was used: we extended the original application license to give us perpetual read-only access to the original application, archived thousands of tables, created over 100 reports accessing data in over 100 key tables, and printed thousands of invoices to PDF for some of the smaller countries to avoid creating even more reports.

 

Defining the retirement roadmap


Chances are pretty good that you will not have unlimited bandwidth for application retirement, so it will be necessary to prioritize the order of applications to retire and prepare a roadmap highlighting the various stages.

Most of these phases can or must overlap. While building the master inventory must start early, decommissioning and data destruction can occur over a protracted period with little impact on the project. With the exception of this last phase of the process, legacy application experts will be necessary to capture the information, design the technical plans, and create the archives for the old applications. These experts are at risk of changing roles or leaving the company as a result of the acquisition, so the more quickly these phases can take place, the better.

We used a three-stage prioritization, starting with the user requirements for keeping the applications alive. We began by setting a drop-dead date by which all applications must be retired. This helped us with the inevitable “we don’t know how long we will need the data” problem. This was a common response from the business users, and for some applications extended discussions and negotiation were required. There were also cases where the application would not retire, but rather become part of the acquiring company’s system landscape.


The second dimension for prioritization was complexity. Two variables were considered: functional complexity and risk complexity. Functional complexity was simple to gauge, as it was generally clear how large the application was, how much data it had generated, and how important it had been to running the acquired company. Risk was more difficult; factors like the chance of losing critical business or technical expertise were considered. The idea was to balance both variables over the course of the project. We knew that if we tried too many complex applications at the same time we would fail to make progress.

The third and final step was to consider the availability of expert technical and business resources. Often the same resource was needed for more than one high-priority application. These constraints could effectively move higher-priority applications down and lower-priority applications up. Again it was about finding the correct balance.
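The three steps above can be sketched as a toy scoring pass. Every field, weight, and score here is invented for illustration; the actual project prioritization was a judgment exercise, not a formula.

```python
# Toy prioritization sketch: urgency from the business-need date (step 1),
# plus complexity and risk (step 2), then a resource-balance pass (step 3).
apps = [
    {"name": "FinanceApp", "quarters_needed": 1, "functional": 5, "risk": 5, "expert": "anna"},
    {"name": "HRApp",      "quarters_needed": 4, "functional": 3, "risk": 4, "expert": "ben"},
    {"name": "CRMApp",     "quarters_needed": 1, "functional": 4, "risk": 2, "expert": "anna"},
]

def score(app):
    # Sooner business end-date -> more urgent; higher risk/complexity -> sooner.
    urgency = 10 - 2 * app["quarters_needed"]
    return urgency + app["risk"] + app["functional"]

# Step 3: a crude resource constraint - at most one application per expert
# per quarter, so a shared expert pushes the lower-scored app to a later cycle.
this_quarter, booked = [], set()
for app in sorted(apps, key=score, reverse=True):
    if app["expert"] not in booked:
        this_quarter.append(app["name"])
        booked.add(app["expert"])
```

With these made-up numbers, CRMApp scores above HRApp but loses its slot because it shares an expert with the higher-priority FinanceApp, exactly the kind of reshuffling described above.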

Prioritization was performed on a regular cycle. We had a 90-day reporting window, so we broke our project into 90-day chunks. We had very good information for the next 90 or 180 days and less accurate information further out. Immediately prior to each 90-day period the prioritization would be re-evaluated, giving the action plan for the quarter.

 

In scope – Out of scope

The number and disposition of applications seemed to change constantly on this project as the depth of our analysis increased. Early on in the process this was not a large problem as there were many quarters to execute and the additions and deletions tended to average out.

As we got closer to the end of the project it became necessary to take a more critical look at some of the items in the inventory. It became clear that, for various reasons, some applications would not be retired before the normal project close. The lifecycle of these applications fell into three categories: the application was dependent on another application; a business need for the data continued but without a good ROI for creating the required archive; or the business retained an active need for the application and no replacement was forthcoming.

For each application that fell into one of these categories we needed to find a sustaining organization that would agree to take responsibility for the application lifecycle. In virtually every case there was a de facto group that was responsible, some in IT and some in the business. Occasionally some discussions and negotiations were necessary to finalize ownership, but eventually this was completed for all. In each case the project team performed all the planning steps necessary for retirement. For applications that were not continuing indefinitely, it was up to the sustaining organization to execute those plans on the predetermined date.

There was a simple solution for the first set of applications due to how we positioned the last part of retirement in the project plan. The final decommissioning steps for an application were already part of standard operational processes inside IT. To decommission an application, a service request would be entered as a ticket and the steps would be executed. We never knew exactly when the ticket would be completed, but we could rely on the ticketing process and standard SLAs in our completion reporting to stakeholders. For the dependent applications, the sustaining organization only needed to enter the ticket on the correct date.

The second case was much less common. It required some archiving steps prior to the decommissioning ticket being entered. In every case these steps were well defined by the retirement project team and turned over to the sustaining team. The sustaining team would create the archive according to the plans provided at the appropriate time.

The final case was generally a matter of formal acceptance of the continued lifecycle of the application by the retirement project sponsors. The impact to the project was a small reduction in the expected benefit of retirement (see part 1) in the overall ROI. In no case was this reduction significant enough to change the original decision to execute an overall application retirement.

It should be noted that the number of applications that fell into the last two cases could have been smaller if we had started the retirement project much earlier in the acquisition. In particular the need to continue applications would have been captured during the integration gap analysis. However, given that we were addressing well over 200 applications, some small need for these cases probably would have still existed.

 

Conclusions

We were fortunate that the acquisition integration planners had called out a specific need for retiring applications. Although we did not have the experience or knowledge to effectively execute a retirement of this scope before we started, we did have a mandate. By developing the standard methods and processes in the early phases, we substantially simplified the project and increased its quality as it progressed.

In all we developed four significant artifacts: a single master inventory; a prioritization questionnaire and a retirement requirements document for each application; and transfer agreements with sustaining organizations for those applications that would not retire before the end of the project.

 

M&A Application Retirement – Part 1 – The Case for Retirement

M&A Application Retirement – Part 2 – Inventory Time


4 Comments


  1. Marilyn Pratt
    Enjoying your series Russ and so refreshing to see an SAP IT Architect sharing some “behind the scenes” looks at how you approached a retirement of a scope you and your colleagues had never experienced before.  Thanks for such candor and insight.
    Also LOVED the Xcelsius dashboards in your previous blog M&A Application Retirement – Part 2 – Inventory Time. Why oh why didn’t that garner comments? Good stuff and strange that others aren’t chiming in and sharing.
    Wonder if you have thoughts of how to create sustainable applications. Or more simply, how to create applications that, when the need arises to “retire” them, can be sent to the app graveyard without as much pain and bother as your team obviously experienced. Words of caution or wisdom on that count?
    In M&A Application Retirement – Part 1 – The Case for Retirement of your blog series you spoke of “costs include a carbon footprint associated with running the equipment, power costs, under-utilized data center space, staff costs, and many issues retaining skilled staff” – surely there is a sustainable retirement strategy that could be thought of at inception. Are there folks doing that kind of IT architecting?
    1. Russ Beinder Post author
      Thanks for looking Marilyn.

      Perhaps people were not prepared to invest time into a three-part series until they could be sure all three parts exist, as they do now. Certainly the views of part 2 with the dashboard have now jumped a bit. Even if no one was interested in the content (my wife said it was a boring topic), I was expecting some “how to” questions. I learned a lot about Xcelsius building and maintaining that dashboard. Maybe everyone else already has that figured out.

      As far as simplifying retirement, I think we came a long way in understanding how to do things right. In the future I am sure there will be much less pain in all aspects of retiring applications that come from M & A. With the announcement yesterday (Sybase), I hope very much that someone will have an opportunity to build on what we learned in the near future.

      As for what to do outside of the context of M & A, that is a different animal. Last year a colleague and I prepared a strategy paper on the application lifecycle in IT, so I have had an opportunity to consider this problem. At the core of the problem there were two fundamental elements needed for success: you must believe there is a lifecycle that should be managed and you must have an organization and processes structured to manage it. This is a relatively foreign or new concept to most IT organizations.

      I am not an expert on sustainability and carbon footprint in IT is not something we addressed directly when looking at the application lifecycle. My suspicion is that there are many aspects to this question depending what layer of the IT stack you look at. For instance, if you choose to position a data center in a region that relies on coal fired power you are likely to have a higher actual carbon footprint than the same data center positioned in a region with hydro generation. My assumption is that if you operate the minimum necessary application footprint for your business you will have done all you can to minimize the carbon footprint from the aspect of application lifecycle.

      Burdened by an overwhelming demand from their customers to provide new capabilities far beyond their capacity, IT organizations tend to get stuck in an environment where they find themselves struggling to deliver a subset of what is requested. They have little time to innovate in the area of their internal processes. They further complicate their situation by producing ever-increasing technical and functional debt as they ignore “secondary” issues such as application lifecycle. This puts further pressure on demand as systems eventually collapse under their own weight and need wholesale replacement or major renovation as a result.

      M & A retirement is easy. Everything must get shut down or migrated in a safe and effective manner as quickly as possible. It is easy to see a direct effect on carbon when you unplug a server from the wall, but consider the problem of ERP. What does the pro forma business plan look like for that application lifecycle? It seems to simply live forever. As a best-run business we simply upgrade to the latest version or enhancement pack of our own ERP product. Why should we worry about its lifecycle?

      The reality is that there are potentially hundreds or thousands of use cases we enable in our ERP system. As the business changes, as the world around us changes, so do those use cases. The traditional IT approach enables ever-increasing functionality as new use cases are requested, but rarely if ever “turns off” the old ones until it is time for the next wholesale replacement. As a lean management champion I know that when you do things that do not add value you are creating waste. No doubt this waste takes many forms, including those with carbon impacts.

      This is certainly a non-trivial problem that cannot be solved overnight in a large organization. I am hopeful that work that is going on now to improve processes will bear application lifecycle management fruit over the next several years.

  2. Timo Stelzer
    Congratulations Russ … that’s a very good article to bring transparency to application retirement.

    Please let me highlight the Green IT aspect of this. These old systems, which are kept running only for audit purposes, often run on very old hardware. You can imagine that this equipment does NOT have power-saving capabilities on board. Hence, retirement of these systems will help to reduce energy consumption, and with it the electricity bill as well. Another very important aspect is that you will create free space in the data center when you dispose of the old hardware. You see many Green IT aspects around this.

    That’s the reason why we have mapped ILM to SAP’s Green IT Solution map.

