Three Days In Orlando, And All I Got Was T-Shirt Sizing?
Instead of my prior practice of doing a blog-a-day on the just-concluded SapphireNow/ASUG Annual Conference [2018 version], I’m piling all of my session reviews into one post. I did related posts each day, and thought I’d let the content percolate a while before typing. Don’t know if the instant gratification of getting it done is better than this approach.
The first session I attended was “ASUG11103: Cargill’s Best-Practice Super-User Program: Why Leadership Matters”. As a technical lead, I have not been a super-user myself, but I have long worked with my company’s associates who wear that hat. The first thing that struck me was the newness of their program. SAP has been running in major corporations for over 20 years, so starting a program now is interesting. Whether there was a prior effort, or this was a new SAP install, I didn’t catch.
The key takeaways for me were: (1) Have a super-user strategy/plan; don’t just let it grow organically. It won’t. Have a full-time leader. (2) Reward/recognize the efforts, not just with an occasional small cash bonus/gift, but with tangible items such as company directory flags, annual performance report write-ups, and such. (3) Have a formal training program, with completion certificates. Don’t let “exam” failures reflect on the candidate; use the miss as an opportunity to improve the training.
Super-users can be “volun-told,” as in directed to take on the responsibilities, but it’s better to use personality-trait checks to see who is a good fit.
(SAP Jam as the gamification platform…)
The second session was Lockheed Martin’s “ASUG10575: Automating Data Collection from IoT Devices in Manufacturing”. I was not sure what to expect beyond what the title said. As a large, diverse corporation, the content could have ranged far and wide. The presenters were from the Space Systems business, meaning satellites and high-tech gadget wizardry. The specific manufacturing focus was on equipment to be put on orbit, where, as they said, there are no do-overs. Quality control in the past, with torque wrenches for instance, was paper-driven, and rejects caused costly manual rework steps.
Using digital-metered torque wrenches (the IoT devices of the title) allows for instantaneous recording of the work at hand (no pun intended). What happens after procuring such tools is the hard part. I’ll skip the gory details (their slides were awesomely done) and simplify the process: data uploads to a nearby system that gives the operator instant feedback while also recording the task completion for audit purposes. Constant data recording, with verified de-duplication and filtering of nonessential data, helps avoid creating defects in the first place.
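To make the de-duplication and filtering idea concrete, here’s a minimal sketch in Python. The record fields, tolerance bounds, and function names are all my own invention; the actual Lenovo-grade schema and MII plumbing were not shown.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TorqueReading:
    """One measurement uploaded by a connected torque wrench (hypothetical schema)."""
    fastener_id: str
    sequence: int      # which tightening step this reading belongs to
    torque_nm: float   # measured torque in newton-meters

def dedupe_and_filter(readings, low, high):
    """Keep one reading per (fastener, step) and split in/out-of-tolerance values."""
    seen = set()
    accepted, rejected = [], []
    for r in readings:
        key = (r.fastener_id, r.sequence)
        if key in seen:
            continue  # duplicate upload of the same step: drop it
        seen.add(key)
        (accepted if low <= r.torque_nm <= high else rejected).append(r)
    return accepted, rejected
```

The point is only that the feedback loop is cheap: the operator can be told immediately which list a reading landed in, rather than discovering a reject on paper later.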
My only complaint is that the session was done in 30 minutes. For the depth and breadth of the manufacturing controls, there was much more that could have been shown. One audience question was whether the data flows straight to the ERP system or is cached locally; the answer seems to be locally, in the shop-floor MII system. I’ll study the slides to learn more.
[Components/Applications: Atlas Copco, GE Digital, Microsoft SQL Server, SAP Manufacturing Integration and Intelligence (MII), and SAP ERP.]
The third session was my own – “ASUG11741: From Spools to Enterprise Management with SAP Software: Lessons Learned at Stanley Black & Decker”. That write-up is here – A day, an Hour, a Speaking Challenge
Last of the day, after the relief of finishing my own session, was on the Landscape Transformation tool: “ASUG10731: SAP Landscape Transformation Supporting IBM’s Data Migration Strategy”.
The “SLT,” as we’ll call it, is designed to work around the limitations of standard SAP system unloads and reloads, which are time-consuming due to the architectural requirements around data integrity (basically the whole SAP commit stack, what I call the verbucher). By specifying data element characteristics, essentially mapping business processes at the database table layer, the tool can outperform parallel RFC connections by orders of magnitude. While I’ve attended ASUG influence sessions on this tool, there’s nothing like a real-world application to better understand the ins and outs.
One factor that can be leveraged with SLT is skipping data records according to a defined pattern; in IBM’s case, prior models that don’t need to exist in the target system because they aren’t sold, or maybe aren’t serviced. There are complex aspects of the data model that I didn’t quite follow, related to the hierarchy of models, parts, configuration, etc.
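In the spirit of that skip-by-pattern idea, here’s a sketch of what such a filter rule looks like conceptually. The patterns, field name, and record shape are illustrative assumptions, not IBM’s actual SLT configuration (which is defined inside the tool, not in Python).

```python
import re

# Hypothetical "retired model" patterns: anything starting with OLD- or
# ending in -EOL is excluded from the migration.
SKIP_PATTERNS = [re.compile(p) for p in (r"^OLD-", r"-EOL$")]

def should_migrate(record):
    """True if this record should be copied to the target system."""
    model = record["model"]
    return not any(p.search(model) for p in SKIP_PATTERNS)

def filter_records(records):
    """Apply the skip rules to a batch of source records."""
    return [r for r in records if should_migrate(r)]
```

Dropping records at the source like this is part of why a selective copy can be so much faster than a full unload/reload: data that no one needs never travels at all.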
SLT can also pull from non-SAP systems, but the target must be an SAP system. This feature allows for functional transformations, as long as the source-system data relationships are well understood. IBM, as a global manufacturer, is challenged to plan out system migrations that fit regional business needs. Even though data copies are much faster than with other methods, they aren’t instantaneous. Business downtimes need to be estimated, communicated, and executed with precision. They’ve migrated some business lines and showed general plans for future updates.
First session the next day was Bhanu Gupta’s, about a BW 7.5 move to HANA in the cloud: “ASUG12026: Molex Welcomes SAP Business Warehouse 7.5 and SAP HANA: Now How to Make the Most of It”. She and a colleague (Pravin Gupta) spoke about technical aspects of the upgrade, plus metrics around performance improvements (Yay, numbers!). Any system move of such magnitude has complex moving parts, such as creating an “old version” target on the new platform, moving data, running upgrade tests, and dealing with dual-stack developments.
Their project went live just weeks ago; the predicted performance improvements were well documented and the anticipated gains were realized. One of the most dramatic was an improvement in a drop-and-reload activation time from hours to seconds. Not all process improvements were this extreme; under-promise and over-deliver is always a good plan.
As for improvements beyond the hardware/database layer changes, push-down of code from the application to the database layer is in the plans. More optimizations need to be realized through rework, or rip-and-replace process changes; some are simple enough, while others need careful study. Bhanu called this the “creative destruction” process. Steps built in BW 2.0 may still work in the latest edition. Improving the delivery from a time and feature view needs to include business requirements and how the new platform supports them, not just dropping redundant or superfluous layers.
SAP Notes: 1808450, 1908367, 2264716, 2467627
The second session was not on my going-in plan; I picked it because the one I had chosen for that time slot was on the show floor (long walk for a short session). “ASUG11618: Everything I Know About Software Quality Assurance I Learned From My Dogs.” Speaker Lori Schenck is from SAP, and while I’d rather hear customer stories first, Lori knows her topic, and who can resist a good dog tale? I arrived a bit late, and the initial few minutes about walking her dogs didn’t quite resonate. I’m sure I missed the key scene-setting, so I stuck with it. Some of her analogies on dog behavior fit well with the testing concepts presented, but others I didn’t quite follow.
Takeaways: SAP themselves have quality gates in their products (she said 27), and the ones around security and data protection were eye-openers in the new era of GDPR. Do you trust your developers? Good question. You can assume honesty, but having audits may protect you from bad surprises (like ending up on the front page of the Wall Street Journal). The Second Longest SAP Product Name (allegedly, and who could prove otherwise?)
(please make a note of it!)
Penultimate session on Wednesday: “ASUG11636: Come As You Are: A True Story on Engagement Through SAP Enterprise Support with Crocs.” I’ve worked with SAP’s Kristen Scheffler (Head of Customer Engagements for SAP Service) before, but had not heard Jen Hanke of Crocs. Crocs went live recently with SAP, having had a custom-built business applications suite for internet ordering. So unlike “mature” SAP shops, which have years (or decades) of experience to draw on (or be stuck with, depending), they had a clean slate, so to speak.
Performance aspects they’ve uncovered, with the help of SAP support, include data volume management, stock business purchase order records that don’t fit all order profiles, and possible concerns of system responsiveness during the Black Friday/Cyber Monday high retail order times.
Obvious easy wins on data volume are archiving and data prevention (not creating unneeded records in the first place). More complex fixes need to come from careful study of SAP’s recommendations in their automated reports (150 pages at this point).
The session was lively, with good audience interaction and clarifications/explanations of SAP’s myriad offerings, from standard classroom training, to guided expert sessions, to expert chats. There is more than one kind of chat, as we found, so caveat emptor.
Takeaway: get an appointed SAP support contact, and have one on your own side who is the main point of contact.
Last session on Wednesday: “ASUG10387: Real-Time Acquisition of Data with SAP Landscape Transformation Replication Server.” I was a bit apprehensive about this one, given the speakers were from Lenovo in China. They could have had a really great presentation, but when I saw what I’d consider a nervous state before the start, I wondered what would occur. My experience has been that speakers who wander around and say hello to people are comfortable, experienced speakers, and those who don’t are less so. Then there was some hassle with unnecessary badge scanning (attendees this year had an RFID tag on their badge, and every room had tracking devices).
I will give great credit to Andrea Wang (Lenovo Basis Manager) for good communications under the circumstances. He had a receptive audience, which always helps.
The topic fundamentally differed from the use of SLT described above: the IBM use is to move from an old platform to a new one, then decommission the source, while Lenovo is doing real-time pulls from a source system to use the data in a consolidated system, keeping the source intact. Kind of an extract/transform/load (ETL) on steroids.
The chart shown below doesn’t do the concept justice; my apologies.
Takeaways: set up a separate landscape for the SLT server. It can run in an existing ABAP landscape, but won’t scale as well.
Only made it to one official ASUG education session despite having many on my schedule. But it was maybe the highlight of the week: “ASUG10325: SAP Landscape Management in the Cloud: Efficiency and ROI Through SAP Landscape Management and Microsoft Azure.” Wait, another Landscape Transformation session? Nope, Landscape Management. Similar, but not the same. The Landscape Management tool is the grandchild of the original (late, lamented, free) SAP Adaptive Computing Controller, still visible in SAP Help pages (example). It’s a dashboard/tool set to manage large numbers of virtualized or cloud-hosted SAP systems.
The Treasury Board Secretariat presented (I’ll just call them the Canadian Government for simplicity). They were witty, smart, and had a great story. Their plan was to put SAP in the cloud, and then, because it looked more expensive than on-premise, pull it back in.
Best pun of the week: the “Cost Meteorology” analogy, comparing weather forecasters to cloud provisioning. #ASUG2018 Government of Canada.
Cost savings in cloud platforms can result from disciplined shut-down of unnecessary systems, such as the simple example of not spinning up a training system when not needed, or stopping development systems overnight and on weekends.
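To show why that discipline pays off, here’s my own back-of-the-envelope arithmetic (not from the session); the schedule and hourly rate are made-up illustrations.

```python
HOURS_PER_WEEK = 7 * 24  # 168

def weekly_runtime_hours(weekday_start, weekday_stop, weekend_on=False):
    """Hours per week a system actually runs if stopped overnight and,
    optionally, on weekends (hours given on a 24h clock)."""
    weekday_hours = 5 * (weekday_stop - weekday_start)
    weekend_hours = 2 * 24 if weekend_on else 0
    return weekday_hours + weekend_hours

def weekly_savings(hourly_rate, weekday_start=7, weekday_stop=19):
    """Money saved per week versus running 24x7, at a flat hourly rate."""
    running = weekly_runtime_hours(weekday_start, weekday_stop)
    return (HOURS_PER_WEEK - running) * hourly_rate
```

A development system kept up only 7:00–19:00 on weekdays runs 60 of 168 weekly hours, so roughly two-thirds of the 24x7 compute bill simply disappears, which is the whole argument for disciplined shutdowns.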
Simple takeaway: Solution Manager doesn’t like when systems go away.
Use the micro-transaction statistics cloud providers collect for billing for capacity planning and decommissioning analysis. Cloud vendors charge for small units of storage and compute; since they must record these for billing, you can get the records, analyze your usage patterns, and then make decisions on “outages you can live with.” Passing cost reports to management can also alter behavior.
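As a sketch of that kind of analysis, here’s one way to mine billing samples for quiet hours. The record shape (hour-of-day, CPU fraction) and the 5% threshold are my assumptions; real provider exports are richer and messier.

```python
from collections import defaultdict

def idle_hours(usage_records, threshold=0.05):
    """Given (hour_of_day, cpu_fraction) billing samples, return the hours of
    the day whose average utilization falls below `threshold` -- candidate
    windows for 'outages you can live with.'"""
    buckets = defaultdict(list)
    for hour, cpu in usage_records:
        buckets[hour].append(cpu)
    return sorted(h for h, vals in buckets.items()
                  if sum(vals) / len(vals) < threshold)
```

Feeding a month of samples through something like this is the manual version of the analysis; the machine-learning next step they mentioned would presumably automate spotting these patterns.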
The change in their anticipated versus actual spend:
Next steps: pull the usage data into a machine-learning environment to bolster or replace the current manual analysis and reporting. Looking forward to that.
They have PowerShell scripts that wrap SAP Landscape Management in a user-friendly way, letting users control the provisioning they’re authorized to access instead of routing tickets (and waiting).
(clouds over OCCC)