It’s a wrap for the 2021 Answer-Thon Contest
The 2021 SAP Community Answer-Thon ended in August, with announcements of the winners and participants in a story by Julia Russo (not to mention a post title that included the Unicode sequence F09F9191 [👑]).
A few behind-the-scenes and why-did-this-happen comments. The original vision of the answer-thon was to mimic the fund-raisers for public media and other events where a few volunteers get together virtually and put their muscles (physical or brain-wave) into a common task. Like barn-building even. We could not come up with a compelling reason for a bunch of SAP Champions to get together over a weekend and do nothing but try to answer questions. An alternate view questioned whether an AI could be let loose on the common unanswered questions and put a bunch of auto-responses on the sticking-up-nails. That didn’t seem to fulfil a human desire to get together during the tough period in recent history. The first Answer-Thon rounds were just a few of us in one-hour typical business meeting settings to review one-by-one those questions deemed interesting.
The initial sets of questions we looked at were on particular topic areas, or were ones that had a lot of views but no answers, or maybe even no comments. The public Answer-Thon we designed included topic areas (chosen by someone), fenced in by recent initial posts (with minor exceptions), and no accepted answers.
As the method we used to flag the questions “in play” in the contest was a tag on the question itself, we avoided having to keep a separate database or sheet of unanswered questions, but with one minor side effect. When the “ccat” tag was applied to just track the link, that page floated back to the top of recent “activity”. At first I thought that would skew the desired results by throwing random flashbacks into people’s streams, and revealing the tag before we planned to, but given removing the tags later would put each post back into the active stream, it just stayed out there.
Eventually, over 500 questions were thrown into the contest hopper, some having few views and some more. Along the way as I tried to gauge interest, I used variations of the URLs to check whether questions were being addressed.
This shows only those with accepted answers, with what looks like a dismal rate of under 4% success. Changing this filter to “answered” puts the number close to 100, meaning a clearance rate more like 20% than 4%. What prevents posters from going back and accepting answers is beyond my scope here, so let’s look at other numbers to measure how questions get to answers on the community.
There is a filter option for “no results”, which hopefully can be negated to derive “some results.”
This gives 245, meaning with a rule-of-thumb approach that about half of the questions had no response and half had “some.” You’d have to have much more data than I do to determine if the site as a whole manages to get responses on most questions. I’d guess some topics get more than others, and certainly believe that generalized speculations like “is this possible” get read but no traction otherwise. I don’t see an incentive to put out that much work with little chance of return other than to feel good about contributing.
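The clearance rates quoted above are simple ratios; here is a rough sketch of that arithmetic. The counts are approximations pulled from my URL-filtered views (the accepted-answer count of 19 is my assumption of what “under 4%” of roughly 500 questions implies, not an exact figure):

```python
# Back-of-the-envelope clearance rates for the Answer-Thon question pool.
# All counts are approximate, taken from filtered views of the tagged questions.
total_questions = 500   # questions tagged into the contest ("over 500")
accepted = 19           # assumed: roughly what "under 4%" accepted implies
answered = 100          # "close to 100" had at least one answer
some_results = 245      # questions with at least some response

def rate(part: int, whole: int) -> float:
    """Return part/whole as a percentage, rounded to one decimal place."""
    return round(100 * part / whole, 1)

print(f"accepted-answer rate: {rate(accepted, total_questions)}%")      # ~3.8%
print(f"answered rate:        {rate(answered, total_questions)}%")      # ~20%
print(f"some-response rate:   {rate(some_results, total_questions)}%")  # ~49%
```

This also shows why the two headline numbers diverge so much: accepted answers measure whether posters came back to close the loop, while the answered filter measures whether responders showed up at all.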
In the last post I did before the Answer-Thon ended, I looked at 10 specific questions with the “ccat” tag that I also found interesting. Understandable, if not containing enough clues for the detective, ABAP or not. Now, reviewing those questions to see if my calling attention to them resulted in any answers or even comments, I only see one with an accepted answer. I was hoping for more, of course, but having zero results would have been rather disappointing. I went ahead and commented on all but the answered one to see if the original posters had feedback. We’ll see if they intend to “close the loop.”
Was this Answer-Thon a success? If the only goal was a significant increase in the number (or percentage) of answered questions, then probably not. Looking at community participation is tricky from the data I have; when I run queries like the above URLs, the number of contributors is only *21*. I assume those are the responders, since there are definitely more than that number of questioners. And the figure does not change as I select different filters, so I’ll need to dig in deeper (with back office help).
How many tags did the contest cover?
Number of different tags is: 14 (this is main topics only, not all the subtopics underneath those).
Originally we were going to have a very narrow focus, so there was a bit of drift once the questions started being tagged. I also found a discrepancy: my views show 498 questions while the SAP Community team reports 583. More homework is needed to validate the counts.
OK, it’s not 21, it’s a different number. I have no idea why this changes as I move from page to page; sometimes it stays constant, other times it shifts to another value. Here, it’s 4:
I want to thank the SAP Champions that provided feedback, moral support, and answers to the community, other community members like the 21 (or more) contributors, and the Community team, particularly Julia and David.
Hope to see everyone when we launch the 2022 Answer-Thon. New prizes? Maybe. New questions? Definitely. More answers? Yes, that’s the plan.