User Experience Insights
There Are No Failures In Usability Testing
I posted about scheduling a software usability session, and this week I had my one-on-one with the developers. Without going into specific details of the software platform (other than, of course, that SAP HANA is involved), here is my take on the experience, particularly compared to the previous in-person ones.
Obviously, the whole sign-up process differs from the table-in-the-hallway arrangement I’d seen at conferences, where you’d speak to a person about the costs and benefits, haggle over timing and availability, and get a response or an escalation. In a virtual sign-up, the registration is like many other software gauntlets where you must provide personal information to prove you can return responses that make it worth the developers’ time. Being there in person skips that commitment stage, with the obvious assumption that anyone who traveled somewhere to learn and talk about business software can do so competently.
With the virtual appointment, there was a second step unneeded in physical settings: your application must be approved before you can proceed to the tests. While I was certain I could “add value,” I wasn’t quite sure I’d get through the gate. Was I useful enough, not having been hands-on with an SAP HANA system in any substantial way since my last career move? I thought so, or I would not have bothered, but if we didn’t have doubts about our competence we wouldn’t be human.
My approval went through, and I didn’t need to try any of the “end-around” steps I had imagined might be needed if they asked who I was and why I was bothering. At my appointed time, I clicked on the supplied link and went into the virtual test zone. As suggested, I started early to get my home gear ready for the experience. That was another difference from conference events, where the developers cart all their gear into place and you just show up. As a personal note, I have more than one computer available, but that inventory includes devices down to a $15 Raspberry Pi Zero with a web browser that may or may not have enough power to run a video conferencing app. I was able to chat before the test started and was advised that a mouse would be handier than a trackpad. Fortunately, there were no equipment glitches on my side, though it was dicey at the end when I had to open a web page myself (I had tossed a coin and left Chrome running, with my usual 40 or 50 tabs open…).
Whether you can contribute to a usability test depends, I believe, not so much on what applications you use or have used as on whether you can pick things up quickly, comment on what you are doing and seeing, and, in a sense, analyze on the fly what you are being asked to test. Certainly the teams want a diverse set of testers, as people notice and adapt in different ways; feedback is the commonality. I also remember that if you were one of the later-scheduled testers and pointed out a particularly glaring error, the facilitators would be at the point where they’d say “yes, that’s broken, let’s move on.” So more diverse views could generate feedback that hadn’t been trodden previously.
As an aside, I needed to sign a privacy (non-disclosure) agreement beforehand, similar to the in-person arrangement. In addition, I was asked whether I wanted to leave my camera on during the session recording. I also needed to agree to being recorded, all the usual precautions for sharing what some might consider confidential business details (though I think this is more about being candid than being revelatory). Comparing in-person software test observations with remote sessions: in the former, developers can (I expect) read body language that shows whether the tester is tense, confused, or getting into the workflow rhythm.
Before we started, I had attracted a little crowd. My recollection of in-person tests was that there was normally me and two others: one running through the example scenario tasks, the other taking notes and making decisions. I couldn’t say who were hands-on developers and who were water-carriers in person, and even less so remotely. We ended up with me and four others, only one of whom was on camera at any time, and only two of whom said more than a “hello.”
I felt a sense of accomplishment getting through the scripted scenarios. I don’t know why, as the introductions always insist that *I’m* not being tested, but the software is. This is testimony to the art of test-writing, which every teacher will understand. Too easy, and every student breezes through without revealing much; too hard, and the test doesn’t get finished, and both teacher and student are frustrated.
Since I can’t talk about the software I tested in any meaningful sense, let’s just say creating, sorting, and pushing documents around was the general idea. I had more recent experience with enterprise printer applications, so I think I could leverage that into useful comments. But on the front end, I never created orders or contracts or the major business documents that a BPXer might. Probably the most cogent feedback I had in that regard was that end users might want varying levels of confirmation, depending on their routine and rules.
A primary caveat in the usability testing I’ve done is the affirmation that I, as the proxy end user, am not being tested, and any results do not reflect on me or my actions. I understand that on a literal level, yet I think anyone presuming to add value as a tester feels responsible to contribute at a high level, and slipping on obstacles such as personal or equipment limits can be detrimental.
For me, one challenge was keeping the instructions and the action areas in focus at the same time. With my own browser session, I could leave the documentation tab open but pull it out of the main session window, so I only needed to glance at one corner to see whether I was supposed to merge, split, or redact the documents at my disposal.
Go ahead, test code, you’ll like it, they’ll like it.
(The post title refers to the old saying that there are no failures, just failures to communicate.)