The importance of proper communication for planned events, such as upgrades, is a given for those of us in business process support. Communicating with key users in the business is a pleasant responsibility when the system is up and running and the message is a happy one, such as new and eagerly anticipated functionality. When things go wrong, however, finding the happy medium between too much information and not enough can be tricky. I was reminded of this challenge yesterday, when I received an alert for one of my development servers. The email message from system monitoring certainly looked official and precise, and it included all the essential identifying information about the name and location of the server. Then I read further, trying to find out what exactly was going on with my system, and I found this key message:

Status: Bad

Well, now, there you have it. Bad. Hmm. What was I supposed to do about that? No additional information was provided. Threshold level? Blank. Current level? Blank.

I shared the message with an SAP Mentor, adding a sarcastic comment, "I just hate it when these server support geeks use all of that technical jargon."

One of the first things they teach journalism students is that the first paragraph of a news report should include the four Ws: Who, What, When, and Where, in terms the reader is likely to understand. Details can follow in subsequent paragraphs, including the Why of editorial commentary, but someone reading only one paragraph needs to be able to come away with the gist of what happened. It is a technique I try to keep in mind whenever I communicate with my key users, and I suggest that it is particularly important when communicating system problems. An overly technical explanation can lose the reader in jargon and minutiae, but a message with insufficient detail can be just as frustrating.

When I inquired with server support, they could not tell me anything more about what was going on with my server. I could still log in and use the system, and the services were running as expected. I shrugged off the message and went back to work, and I've heard nothing further. The nightly scheduled jobs ran successfully.

Later I showed the alert to my manager, and we had a good laugh over the alert messaging at this support level. I jokingly suggested that, if we wanted an alert that was actionable, we probably needed to upgrade from that "tin" service level to the higher-priced "platinum" high-availability support.

For me, the incident drove home the importance of communicating what users need to know, in terms they understand, so that they can respond appropriately: a message pitched at neither too detailed nor too high a level, with neither too much jargon nor vague statements that result in frustration rather than understanding and appropriate action.