
Retiring applications that came from an acquisition is a pretty simple process: lock down, archive if necessary, wipe if necessary, and decommission. It becomes a daunting task when you start to consider tens or hundreds of applications and hundreds or thousands of servers, compliance and regulatory requirements, and the need to retain historical information. What daunting task cannot be easily executed with a good standard process? See M&A Application Retirement - Part 1 - The Case for Retirement for more on the case for retirement and M&A Application Retirement - Part 2 - Inventory Time on the importance of a good inventory.

Early on in our retirement project the team identified the need for a standard approach. As I mentioned in M&A Application Retirement - Part 2 - Inventory Time, in retrospect we started a little later than we should have. The first step was to bring everyone up to speed on the steps in that last phase of an application’s lifecycle: its retirement.

 

The all-important lockdown

The lockdown phase of retirement is intended to switch the application from read/write mode to read-only. Depending on the application technology, this could be as simple as a configuration setting or as complex as running a number of database scripts. Our applications tended to fall into two categories when it came to the need for lockdown: financially relevant and everything else. For the first category, timing was critical. We had to be able to demonstrate to auditors that we ceased to do business on the old applications at the same time we cut over to the new systems.

Detailed plans were created for these critical applications, along with lockdown tests run in advance of the cutover day. For core financial applications there was a two-phase lockdown. The first phase switched most users to read-only on cutover day, but allowed a select few individuals in finance to make controlled changes to the system in order to close off the last period’s business. The second phase finished the lockdown for all users.
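For database-backed applications, the lockdown itself can be a short script. The sketch below is only an illustration of the two-phase idea, assuming a PostgreSQL-style database and made-up schema and role names (erp, app_users, finance_close); our actual lockdown steps varied with each application's technology.

```python
# Hypothetical two-phase lockdown for a PostgreSQL-backed application.
# Phase 1 (cutover day): revoke write access from general users while a
# finance-close role stays writable so the last period can be closed.
# Phase 2 (after period close): revoke write access from that role too.

WRITE_PRIVS = "INSERT, UPDATE, DELETE, TRUNCATE"

def lockdown_statements(schema, general_roles, finance_role, phase):
    """Return the SQL statements for the requested lockdown phase."""
    roles = list(general_roles) if phase == 1 else [finance_role]
    statements = []
    for role in roles:
        statements.append(
            f"REVOKE {WRITE_PRIVS} ON ALL TABLES IN SCHEMA {schema} FROM {role};"
        )
        # Also block writes on any tables created after the lockdown.
        statements.append(
            f"ALTER DEFAULT PRIVILEGES IN SCHEMA {schema} "
            f"REVOKE {WRITE_PRIVS} ON TABLES FROM {role};"
        )
    return statements

if __name__ == "__main__":
    # Schema and role names are placeholders for illustration only.
    for sql in lockdown_statements("erp", ["app_users", "app_batch"], "finance_close", phase=1):
        print(sql)
```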

For critical applications, precise dates and times of the lockdown along with evidence of the switch to read-only were captured. This information would be used later as input to future retirement phases as part of a complete auditable trail for the retirement process. Early on we underestimated what it would take to have a complete and unambiguous trail from the live application to the archive. Better end-to-end planning and testing would have helped in this area.
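To give a flavor of what "captured" meant in practice, here is a minimal sketch of an evidence record, assuming the proof of read-only mode (a screenshot, a script output, a permissions report) has been saved to a file; the file names and fields are illustrative, not the format we actually used.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_lockdown_evidence(app_name, evidence_path, log_path="lockdown_log.jsonl"):
    """Append one lockdown record: which application, when, and a hash of the evidence file."""
    with open(evidence_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    entry = {
        "application": app_name,
        "locked_down_at": datetime.now(timezone.utc).isoformat(),
        "evidence_file": evidence_path,
        "evidence_sha256": digest,
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry
```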

The benefit of a comprehensive lockdown should be evaluated on the basis of cost and risk to the organization. The vast majority of applications required almost no effort; in a few cases intensive effort was necessary. We had some applications in our inventory that were a legacy of past acquisitions for which no documented, rigorous lockdown had ever been performed. We determined the need to correct the past on a case-by-case basis. Many of these applications had been sitting idle for several years. We experimented with locking down one of these legacy applications, but the cost of the forensic review easily outweighed the value (risk) to the organization.

 

Archives are a whole new project

I noted in part 1 that the IT organization is the customer for the retirement project. However, when an application’s retirement will require an archive, you will have a business customer just as you did for the original application. Effectively the archive is replacing the application the business users once had. This time the application probably has a fixed termination date as it is really the last phase of the original application’s lifecycle.

Generally, every application will get a tape archive. This was a relatively straightforward task for us, as there were operational teams in place that made tapes on a regular basis. Each tape was given a destruction date according to the data retention requirements. We used tape technology that has an anticipated life of 30 years, but, with one exception, the destruction dates were within 10 years.
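The arithmetic behind those destruction dates is simple: the retention period added to the archive date. A minimal sketch, with made-up retention rules (the real periods came from the retention requirements for each country and data category):

```python
from datetime import date

# Illustrative retention periods in years; the real rules varied by
# country and data category.
RETENTION_YEARS = {"finance": 10, "hr": 30, "operational": 7}

def destruction_date(archive_date: date, category: str) -> date:
    """Destruction date = archive date plus the retention period for the category."""
    years = RETENTION_YEARS[category]
    try:
        return archive_date.replace(year=archive_date.year + years)
    except ValueError:  # Feb 29 archive date landing on a non-leap target year
        return archive_date.replace(year=archive_date.year + years, day=28)

print(destruction_date(date(2011, 3, 31), "finance"))  # 2021-03-31
```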

If there was a certainty or near certainty that the data would be needed, additional archives would be created. These archives could be as simple as PDF documents or Excel worksheets or as complicated as a fully online archive with reporting facilities.

 

Online archives

It is the last category of archive, online, that gave us the greatest challenge. We used a tool to create the archives from the database. Once it was properly set up, making an archive from any database was a very easy task. The archive was very compact and in a format that we could expect to be able to interpret 30 years from now. This was important, as certain HR data had a retention period of 30 years. Since the one tool can support many archives, we expect the costs to maintain these archives to be fairly low relative to the cost of continuing to run the legacy applications.
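The archiving tool itself is not the point here, so purely as an illustration of the idea, the sketch below dumps every table of a database to plain CSV with a small manifest of row counts, on the assumption that a simple, well-documented text format is a reasonable bet for long-term readability. It uses SQLite only to keep the example self-contained; it is not how our tool worked.

```python
import csv
import json
import sqlite3
from pathlib import Path

def archive_database(db_path, out_dir):
    """Export every table to CSV and write a manifest with row counts."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(db_path)
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    manifest = {}
    for table in tables:
        cursor = conn.execute(f"SELECT * FROM {table}")
        with open(out / f"{table}.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cursor.description])  # column headers
            rows = cursor.fetchall()
            writer.writerows(rows)
        manifest[table] = len(rows)
    (out / "manifest.json").write_text(json.dumps(manifest, indent=2))
    conn.close()
```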

The biggest challenge with the online archive is data access. The original applications presented data to users via screens, reports, and business intelligence tools. If an auditor asks to see an invoice from the archived finance data, it must have the same information and likely the same format as the original invoice. With the old application shut down, the only way to get this information is to replicate this functionality in the archive system.
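To make "replicating this functionality" concrete: reproducing an invoice from the archive typically means joining the archived header and line tables back together and formatting the result like the original document. The table and column names below are hypothetical, not our finance application's actual schema.

```python
# Rebuild one invoice from hypothetical archived tables (SQLite-style connection;
# other databases would use a different placeholder style).
INVOICE_QUERY = """
SELECT h.invoice_no, h.invoice_date, h.customer_name, h.currency,
       l.line_no, l.description, l.quantity, l.unit_price,
       l.quantity * l.unit_price AS line_total
FROM   invoice_header AS h
JOIN   invoice_line   AS l ON l.invoice_no = h.invoice_no
WHERE  h.invoice_no = ?
ORDER BY l.line_no
"""

def fetch_invoice(conn, invoice_no):
    """Return the header fields and line items for one archived invoice."""
    rows = conn.execute(INVOICE_QUERY, (invoice_no,)).fetchall()
    if not rows:
        raise LookupError(f"Invoice {invoice_no} not found in archive")
    return rows
```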

The project team must weigh the benefits of keeping the old application alive for an extended period of time versus building a whole new set of reports. Keeping the old application alive has support ramifications as knowledge leaves the company; there may be ongoing costs to the application vendor to keep the application running; it may be technically difficult for users to access the application in the new company network environment; and the users that will need the data over time are unlikely to be familiar with the original application. There needs to be a trade-off between the number of reports that need to be built and the length of time the old application is kept alive.

In our project, our largest online archive was created for a finance application. A hybrid approach was used: we extended the original application license to give us perpetual read-only access to the original application, we archived thousands of tables, we created over 100 reports accessing data in over 100 key tables, and we printed thousands of invoices to PDF for some of the smaller countries to avoid creating even more reports.

 

Defining the retirement roadmap

Chances are pretty good that you will not have unlimited bandwidth for application retirement, so it will be necessary to prioritize the order of applications to retire and prepare a roadmap highlighting the various stages.

Most of these phases can or must overlap. While it is critical that the master inventory be started early, decommissioning and data destruction can occur over a protracted period with little impact to the project. With the exception of this last phase of the process, legacy application experts will be necessary to capture the information, design the technical plans, and create the archives for the old applications. These experts are at risk of changing roles or leaving the company as a result of the acquisition, so the more quickly these phases can take place, the better.

We used a three-stage prioritization, starting with the user requirements for keeping the applications alive. We began by setting a drop-dead date by which all applications must be retired. This helped us with the inevitable “we don’t know how long we will need the data” problem. This was a common response from the business users, and for some applications extended discussions and negotiation were required. There were also cases where the application would not be retired, but would instead become part of the acquiring company’s system landscape.

The second dimension for prioritization was complexity. There were two variables to consider: functional complexity and risk complexity. Functional complexity was fairly simple to gauge, as it was generally clear how large the application was, how much data it had generated, and how important it had been to running the acquired company. Risk was a little more difficult, and factors like the chance of losing critical business or technical expertise were considered. The idea was to balance both variables over the course of the project. We knew that if we tried too many complex applications at the same time, we would fail to make progress.

The third and final step was to consider the availability of expert technical and business resources. Often the same resource was needed for more than one high-priority application. These constraints could effectively move higher-priority applications down and lower-priority applications up. Again, it was about finding the right balance.
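As an illustration of how these three dimensions might be combined, here is a minimal scoring sketch; the weights and scales are entirely made up, and in practice the balancing was as much judgment and negotiation as arithmetic.

```python
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    quarters_until_drop_dead: int   # business requirement: how soon it must be retired
    functional_complexity: int      # 1 (simple) .. 5 (very complex)
    risk_complexity: int            # 1 (low risk of losing expertise) .. 5 (high)
    experts_available: bool         # are the needed business/technical experts free now?

def priority_score(app: Application) -> float:
    """Higher score means schedule sooner; urgency and risk push up, missing experts push down."""
    urgency = 10 / max(app.quarters_until_drop_dead, 1)
    score = urgency + app.risk_complexity + 0.5 * app.functional_complexity
    if not app.experts_available:
        score -= 5  # a resource constraint can push a high-priority application down
    return score

apps = [
    Application("legacy-erp", quarters_until_drop_dead=2, functional_complexity=5,
                risk_complexity=5, experts_available=True),
    Application("hr-portal", quarters_until_drop_dead=6, functional_complexity=2,
                risk_complexity=4, experts_available=False),
]
for app in sorted(apps, key=priority_score, reverse=True):
    print(f"{app.name}: {priority_score(app):.1f}")
```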

Prioritization was performed on a regular cycle. We had a 90-day reporting window, so we broke our project into 90-day chunks. We had very good information for the next 90 or 180 days and less accurate information further out. Immediately prior to each 90-day period, the prioritization would be re-evaluated, giving the action plan for the quarter.

 

In scope - Out of scope

The number and disposition of applications seemed to change constantly on this project as the depth of our analysis increased. Early on in the process this was not a large problem as there were many quarters to execute and the additions and deletions tended to average out.

As we got closer to the end of the project, it became necessary to take a more critical look at some of the items in the inventory. It became clear that, for various reasons, some applications would not be retired before the normal project closing. These applications fell into three categories: those dependent on another application; those where a business need for the data continued but there was no good ROI for creating the required archive; and those where the business retained an active need for the application and no replacement was forthcoming.

For each application that fell into this category we needed to find a sustaining organization that would agree to take responsibility for the application lifecycle. In virtually every case there was a de facto group that was responsible; some in IT and some in the business. Occasionally some discussions and negotiations were necessary to finalize ownership, but eventually this was completed for all. In each case the project team performed all the planning steps necessary for retirement. For applications that were not continuing indefinitely, it was up to the sustaining organization to execute those plans at the predetermined date.

There was a simple solution for the first set of applications due to how we positioned the last part of retirement in the project plan. The final decommissioning steps for an application were already part of standard operational processes inside IT. To decommission an application, a service request ticket would be entered and the step would be executed. We never knew exactly when the ticket would be completed, but we could rely on the ticketing process and standard SLAs in our completion reporting to stakeholders. For the dependent applications, the sustaining organization only needed to enter the ticket on the correct date.

The second case was much less common. It required some archiving steps prior to the decommissioning ticket being entered. In every case these steps were well defined by the retirement project team and turned over to the sustaining team. The sustaining team would then create the archive at the appropriate time, according to the plans provided.

The final case was generally a matter of formal acceptance of the continued lifecycle of the application by the retirement project sponsors. The impact to the project was a small reduction in the expected benefit of retirement (see part 1) in the overall ROI. In no case was this reduction significant enough to change the original decision to execute an overall application retirement.

It should be noted that the number of applications that fell into the last two cases could have been smaller if we had started the retirement project much earlier in the acquisition. In particular the need to continue applications would have been captured during the integration gap analysis. However, given that we were addressing well over 200 applications, some small need for these cases probably would have still existed.

 

Conclusions

We were fortunate that the acquisition integration planners had called out a specific need for retiring applications. Although we did not have the experience or knowledge to effectively execute a retirement of this scope before we started, we did have a mandate. By developing the standard methods and processes in the early phases, we substantially simplified the project and increased its quality as it progressed.

In all we developed four significant artifacts: a single master inventory; a prioritization questionnaire and a retirement requirements document for each application; and transfer agreements with sustaining organizations for those applications that would not be retired before the end of the project.

 

M&A Application Retirement - Part 1 - The Case for Retirement

M&A Application Retirement - Part 2 - Inventory Time
