Lessons learned from a SAP Work Manager implementation
As we work towards the go-live of a large SAP Work Manager implementation, I would like to share some of the lessons that we learned as a team along the way.
This was (and continues to be) a very challenging implementation, and the SCN community has been extremely helpful throughout; this is a small way I can give back to the community. I would have preferred to write a technical blog (I already have a couple of ideas and will write one later), but I hope these lessons learned will help you in your Work Manager implementations and client conversations.
Before I start, I wanted to cover the skillsets that were part of our team; you will pretty much need all of these in your implementations:
1) SMP administrator/mobile architect – This is first on the list as this was my role (just kidding, these roles are not listed in any particular order). You will need an SMP administrator who is responsible for sizing, installations, upgrades, application deployments and so on; this could be the BASIS person on the client side. In our case, we had amazing BASIS support from the client. You also need a person responsible for the overall system architecture, troubleshooting system and network issues, evaluating Work Manager sync performance and pretty much everything else that is part of your mobile landscape. Below is a high-level overview of our mobile landscape.
MDM = mobile device management solution, such as Afaria, MobileIron, AirWatch etc.
2) Syclo configuration/functional – This person is responsible for all the configuration at the Syclo level. Technically, an experienced EAM functional consultant can upskill in this area as well, but you are better off having a dedicated resource for this.
3) Syclo technical: Agentry developer + ABAP developer – Responsible for all development work; this is your development team. We had two developers on our team: one was an Agentry developer who could understand ABAP concepts, and the other was an experienced ABAP developer who also had Agentry development experience. Having developers who understand concepts on both sides is a HUGE advantage, and it is difficult to find resources in this area.
4) iOS developer/platform-specific developer – We deployed our application on iOS devices, so initially there was some work to include GIS libraries in the client. We were able to fill this role with the help of an offshore iOS developer. Depending on the complexity and scope of your implementation and the platform, you may need more resources. Currently, we are working on integrating GIS with SAP Work Manager and had to bring on additional resources. Here is my blog on how to integrate GIS libraries in SAP Work Manager: Integrate GIS libraries in SAP Work Mgr. – iOS – part 1
5) MDM administrator – Person responsible for the MDM aspects of your implementation. We ended up working a LOT with this person. In our case, this role was filled from the client side.
I will add to this list if I realize I forgot anything, but for now, here are the lessons learned. Please note the following:
1) Some of these are more specific to system architecture. I am sure the other team members would like to add challenges from their particular areas; I will discuss with them and have them add lessons learned from their side as well.
2) These are not listed in any particular order
3) Not all of these are Agentry specific; some of them may apply to other mobility and user experience implementations as well
Lesson learned 1 – Ensure that you have client support across each technical component in your mobile architecture
As you can tell from the architecture diagram above, we had 6 servers on the MDM side (including a load balancer); 2 application servers, 2 database nodes and 1 Cisco load balancer on the SMP side; and SAP (8 application servers, plus database nodes configured in active-standby mode). In addition, we had other servers such as the document management system and the GIS server, and the list continues. It is important to establish early on that you will need support from all involved parties if you want your implementation to succeed. We struggled with this initially, but eventually we had buy-in from all sides.
Lesson learned 2 – Evaluate how mature your client is in terms of enterprise mobility: do they have other apps that run on the same platform?
Don’t assume anything just because they have an MDM solution and a device policy in place. For example, in our case this was one of the first enterprise iOS applications being rolled out, and we found out that the infrastructure was not in place for Macs to connect to the client’s intranet. This, of course, meant that we could not test iOS application code on the simulator. So initially we had to make small changes, package the application and send the .ipa to the MDM administrator; they would upload it to the enterprise app store, we would download it and test, and then we would go through this whole process again for each additional change.
Lesson learned 3 – Make sure expectations are set for performance related metrics like initial and delta sync performance etc.
Our client had a LOT of asset data in their system, just crazy amounts. Of course, this had an impact on how long initial syncs took; in our case we started off at around 50 minutes to an hour for an initial sync, which was unacceptable. We had to go through a lot of performance optimization exercises: there were some issues at the code level, some at the DB level, and some product issues that were causing performance degradation. In the end, we got this down to about 30 minutes on average for an initial sync.
Lesson learned 4 – Conduct enough proof of concepts and shadow your end users as much as possible
In the initial phases of the project, we conducted 4 POCs and solicited feedback from end users from both an application AND a device perspective. The iPad was the preferred device. We also got very good insights into how the end users would actually use the application, as we followed them to maintenance sites while they went about their days. This was critical for us to empathize with them and understand some of the pain points they experienced in the later stages of the project.
Lesson learned 5 – Don’t just listen to anecdotal feedback about sync performance, pull sync statistics to have the right data
In the course of the project, we started dreading the term “sync issue”. Don’t get me wrong, we had a LOT of these sync issues initially: some were due to poor network, some due to product issues, and some were triggered by backend performance that had room for optimization. Regardless of the cause, everything was clubbed under the umbrella of “sync issues”. We resolved a lot of them, but there was still a negative perception about the solution. To get around this, we started pulling the Agentry logs daily (server, events and messages logs) and identifying patterns in them; I cannot tell you how many server and messages logs I had to analyze. But the hard work paid off: we now have an automated Excel sheet (and also an ABAP program) that takes Agentry logs as input and gives us sync statistics as output. This gave us data to turn around the “perception” of sync issues. See the example below:
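To give an idea of what such log analysis can look like, here is a minimal Python sketch that pairs sync start/end events per user and reports durations. The log line format shown is a deliberately simplified, hypothetical one; real Agentry server and messages logs have a richer layout, so the regex would need to be adapted to your product version.

```python
import re
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Hypothetical simplified line format: "timestamp, user, event".
# Real Agentry logs differ; adjust the regex to match your logs.
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}), (?P<user>\S+), "
    r"(?P<event>Sync start|Sync end)$"
)

def sync_stats(lines):
    """Pair 'Sync start'/'Sync end' events per user; report count and average duration."""
    open_syncs = {}                 # user -> start timestamp of an in-flight sync
    durations = defaultdict(list)   # user -> completed sync durations in seconds
    for line in lines:
        m = LINE_RE.match(line.strip())
        if not m:
            continue  # skip unrelated log lines
        ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S")
        if m["event"] == "Sync start":
            open_syncs[m["user"]] = ts
        elif m["user"] in open_syncs:
            start = open_syncs.pop(m["user"])
            durations[m["user"]].append((ts - start).total_seconds())
    return {u: {"syncs": len(d), "avg_secs": mean(d)} for u, d in durations.items()}

# Example with made-up users and timestamps:
sample = [
    "2015-10-01 07:05:00, TECH01, Sync start",
    "2015-10-01 07:35:00, TECH01, Sync end",
    "2015-10-01 07:10:00, TECH02, Sync start",
    "2015-10-01 07:20:00, TECH02, Sync end",
]
print(sync_stats(sample))
```

Feeding a day’s worth of logs through something like this lets you answer “how many syncs, how long on average, for whom” with numbers instead of anecdotes.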
Lesson learned 6 – Use your application as your users would and do not underestimate the role of network, something as trivial as this can influence solution adoption
This in some ways ties into lesson learned 4. As consultants, we were used to hooking our devices to MiFi hotspots and working off of those. During the pilot go-live, we realized that some of the client sites had network speeds of about 1 Mbps. Trying to perform an initial sync against a backend with a lot of asset data over such a link is like trying to watch a 4-hour Netflix movie on a dial-up connection. Moreover, we were told that the end users would come to the yard in the morning, sync their devices to get that day’s work, and then sync again from the yard when they returned. We found out the end users were actually initiating syncs from the maintenance sites, some of which were in the middle of nowhere with bad network coverage. All of these things should be cleared up before implementation. We need to understand that the Agentry framework works in an offline setting, but it needs a good network connection for syncs to go through without too many issues.
Lesson learned 7 – Do not underestimate the impact of a major iOS upgrade
With iOS 9 released recently, Apple changed some of the network libraries on their side. We found out that there were compatibility issues between SAP Work Manager and iOS 9; as a result, “per app VPN” ended up not working on iOS 9 devices, which of course meant that the application could not be used on them. This issue is actually pretty recent, and we are working with all the vendors to resolve it ASAP (this issue is specific to our implementation).
Lesson learned 8 – Check firewalls before troubleshooting anything
This is a pretty straightforward one: we ended up troubleshooting something for a week and later realized that the firewall exception request was not in place. Not fun.
Lesson learned 9 – Configure your load balancer for IP stickiness
If you have a clustered SMP setup and use a load balancer in front of the SMP nodes, make sure the load balancer is configured for “IP stickiness” (session persistence based on the client’s source IP). You don’t want the load balancer to switch the Work Manager client to a different node in the middle of a sync.
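Our setup used a Cisco load balancer, but purely as an illustration of the concept, here is what source-IP persistence looks like in an HAProxy configuration (the node names, IPs and port are hypothetical; check your own load balancer’s documentation for its equivalent setting):

```
# Hypothetical HAProxy config for two SMP nodes behind one VIP.
# 'balance source' hashes the client's source IP, so a given device
# keeps reaching the same SMP node for the duration of its sync.
frontend agentry_in
    bind *:7003
    default_backend smp_nodes

backend smp_nodes
    balance source           # source-IP stickiness
    server smp1 10.0.0.11:7003 check
    server smp2 10.0.0.12:7003 check
```

Whatever the product, the key property is the same: all packets from one client IP land on one SMP node until that node fails its health check.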
Lesson learned 10 – (SMP specific) Have early conversations around performance testing and disaster recovery plan
We actually got great support from SAP in executing our Agentry performance test. I had to re-validate the server sizing iteratively as we got more accurate data about the actual go-live volume. Disaster recovery was a bit of a disaster: the client had no process to restore Linux machines from a tape backup, and we had to resort to some really creative ideas to get a good disaster recovery plan in place. We used a global site selector to configure this; a regular TCP health-check probe detects when the production VIP is down and routes requests to the SMP DR servers. Of course, the disadvantage of this is that we have to maintain the DR and production SMP servers manually and keep them in sync, which is additional effort.
As I mentioned, these are just some of the lessons that come to mind. I will update this blog after getting input from my colleagues, who played different roles on the team, to add their perspectives.
I am also interested in hearing what lessons you learned during your implementations. Hopefully this helps you address some of the challenges you might face in your projects and have the right conversations with your client.
My next project is a SAP Fiori implementation, so I am going to be off Agentry for a while, but I am sure I will come back to it.