Hi, my name is Vladimir Gershanov, and I'm part of the SAP ID Service IT team. I've been working at SAP for a bit over 5 years as a QA Engineer and have worked on a few projects since I started here. This is my first blog entry on SCN.
What I'd like to discuss here is the somewhat special setup of our team, and the QA role within it, with regard to Agile guidelines.
First and foremost, we are a distributed team: we span 4 countries and 4 different time zones. In Agile, the team is supposed to be co-located in one place in order to hold the daily standups, the planning/review and retrospective meetings, and of course to allow pair programming and so on.
Fortunately, with current technology we are able to overcome this obstacle by holding a daily scrum call at a time that suits everyone, scheduled by the Scrum Master. I'm not sure it was very comfortable at the start (compared to a normal co-located daily standup), but the team grew into it: each person speaks in turn, keeping it short and to the point, while postponing any lengthy or in-depth discussion until after the daily round is over, or taking it offline. That wasn't as easy as it sounds, since we all want to discuss a raised topic (whether it's a serious issue or we simply have something to contribute) right here, right now. But with time we learned when is the right time and place to discuss, in depth, anything not related to the daily update. We can now keep the daily scrum call to 15 minutes or even less in some cases; in the very first sprints we could easily run to 30 minutes or more.
All meetings (daily scrum, planning, review and retrospective) are held with screen sharing so we can all follow the subject. During review meetings, colleagues demo the features that are ready for production at the end of the sprint. Each sprint contains at least minimal shippable content.
Thanks to technology we've also overcome another obstacle of being a distributed team: pair programming. By scheduling two daily one-hour slots for pair programming (again over a voice call with screen sharing), team members have the opportunity to book one of these slots with a colleague to work on something together, help each other resolve a problem, do a code review, and so on. Personally, from a QA perspective, I use these slots for issue analysis, debugging and finding a solution together with one of the developers.
With regard to QA, test planning and execution:
The biggest change in moving from a Waterfall project to an Agile project as a QA Engineer was the day-to-day activity. In Waterfall, with monthly releases as we had in Business Center, SCN and other projects, you have roughly 2 weeks for test planning and writing tests according to the scope of the upcoming release, which was already set and finalized, then 1 week of running tests on bugs fixed in the current release, and another week running regression test sets across the entire platform.
In an Agile project this is almost completely different; the scope is more dynamic and subject to change. For example, tickets can be pulled into a sprint from a backlog (for the sprint backlog and product backlog definitions, please click here). Tests can be run on each ticket as soon as its fix is submitted into the build pipeline and the build is green (meaning it has already passed a multitude of automated tests and has been automatically deployed to the QA landscape). There's no need to wait for a daily deployment (as in Waterfall), and you don't lose a day (or so) if that deployment fails.

"Build pipeline" is a term from the "continuous delivery" or "continuous integration" concept. In short, every code commit triggers a build, and each build goes through a lot of automated tests. If any of the tests fail, the build fails, and it's the responsibility of the last code submitter to analyze why the test failed and get it fixed. So when the build is green (all the tests have passed successfully), it means the latest code with the latest features/changes has been successfully (and automatically) deployed by the build job to the relevant system. In the SAP ID Service setup, the build is green when the tests have passed on the Development, Test and QA environments, so once a build is green we have the latest code on all 3 landscapes mentioned above. These builds are triggered several times a day, which means we always have the latest working code on the landscape we work on. Production deployment is done separately after each sprint's end, after additional tests have been run by QA and we've given the go-ahead for deployment to production.
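To make the "green build" idea concrete, here is a minimal, framework-free Java sketch of the gate described above: a commit triggers a build that must pass its tests on every landscape, in order, before it counts as green. The class and stage names are hypothetical illustrations, not our actual pipeline tooling.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of a build-pipeline gate: each landscape is a
// named stage whose automated tests must pass; the first failing stage
// makes the whole build red and stops the pipeline.
public class BuildPipeline {

    // Stages run in insertion order: Development, then Test, then QA.
    private final Map<String, Supplier<Boolean>> stages = new LinkedHashMap<>();

    public void addStage(String landscape, Supplier<Boolean> tests) {
        stages.put(landscape, tests);
    }

    // A build is green only if every stage's tests pass.
    public String run() {
        for (Map.Entry<String, Supplier<Boolean>> stage : stages.entrySet()) {
            if (!stage.getValue().get()) {
                return "RED: tests failed on " + stage.getKey();
            }
        }
        return "GREEN";
    }

    public static void main(String[] args) {
        BuildPipeline pipeline = new BuildPipeline();
        pipeline.addStage("Development", () -> true);
        pipeline.addStage("Test", () -> true);
        pipeline.addStage("QA", () -> true);
        System.out.println(pipeline.run()); // all stages pass -> GREEN
    }
}
```

The point of the sketch is the ordering: a failure on any landscape blocks everything downstream, which is why a red build is immediately the last committer's problem to analyze.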
When we started the project, we had a basis of plans and scenarios derived from other projects that now incorporate SAP ID Service for user management and other scenarios. We kept and maintained those in Quality Center. However, with time, new functionality and mainly the new approach, we have mostly replaced that with automated testing, which runs in each and every build that is triggered. Tests are written in Cucumber and automated in Java (our main development language), plus unit tests where Cucumber is not applicable. We have hundreds of tests written and executed.

In our team the automation is done by developers, since I personally don't have the development knowledge for the task, although in Agile the developer and QA are frequently the same person (no dedicated QA person). So I contribute by reviewing Cucumber scenarios, suggesting changes and additional cases, and occasionally contributing my own. As for test case writing, we mainly record the boundary and special/interesting cases and bugs in the old Quality Center. As a result, the amount of manual testing has shrunk over time, as more and more automated tests (at least for the happy-path and validation scenarios) have been implemented. However, a core of the most important and cross-platform tests is still executed manually and more frequently: per sprint release, or when a mid-sprint deployment is required. This core regression test suite is adjusted each sprint according to new developments, recent bugs and changes.
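For readers unfamiliar with Cucumber: a scenario is written in plain-language Gherkin and driven by Java "step definition" code. The sketch below is a hypothetical illustration, not an actual SAP ID Service test; it is framework-free Java that mimics what Cucumber-generated step definitions would do for a simple validation scenario.

```java
// A Cucumber scenario reads roughly like this (Gherkin):
//   Scenario: Reject login with an unknown e-mail
//     Given a registered user "alice@example.com"
//     When a login is attempted with "bob@example.com"
//     Then the login is rejected
//
// Framework-free sketch of the Java glue code such a scenario would
// drive. All names here are hypothetical, not the real service API.
import java.util.HashSet;
import java.util.Set;

public class LoginScenario {
    private final Set<String> registeredUsers = new HashSet<>();
    private boolean loginAccepted;

    // "Given" step: set up the known state.
    public void givenRegisteredUser(String email) {
        registeredUsers.add(email);
    }

    // "When" step: perform the action under test.
    public void whenLoginAttempted(String email) {
        loginAccepted = registeredUsers.contains(email);
    }

    // "Then" step: the outcome the scenario asserts on.
    public boolean thenLoginAccepted() {
        return loginAccepted;
    }

    public static void main(String[] args) {
        LoginScenario s = new LoginScenario();
        s.givenRegisteredUser("alice@example.com");
        s.whenLoginAttempted("bob@example.com");
        System.out.println(s.thenLoginAccepted() ? "accepted" : "rejected"); // prints "rejected"
    }
}
```

Because the Gherkin layer is plain language, a QA engineer can review and extend scenarios without writing the Java glue code, which is exactly the split of work described above.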
The ability to push a fix to production in a relatively short time when a high-priority issue is discovered has become real and much easier, with a much lower risk of breaking things on Production. There is no dedicated week for test planning, as testing is continuous and runs with each build. The aforementioned core test cases, which are part of our regression testing cycle, are performed on the QA landscape and then on the Productive landscape's inactive pool; after they pass on both, we switch the node pools on Production and run another cycle of these core tests to ensure everything indeed works well. Having 2 pools of nodes for the SAP ID Service Productive landscape allows us to push to PROD with zero downtime, which is a huge benefit.
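The two-pool mechanism is essentially what is often called blue-green deployment. Here is a minimal Java sketch of the sequence described above; the class and pool names are hypothetical, not our actual operations tooling.

```java
// Hypothetical sketch of zero-downtime deployment with two node pools:
// deploy and test on the inactive pool while the active pool keeps
// serving production traffic, then swap which pool is active.
public class NodePools {
    private String activePool = "A";   // currently serving production
    private String inactivePool = "B"; // target for the next deployment

    public String active() { return activePool; }
    public String inactive() { return inactivePool; }

    // Deploy the new version to the inactive pool and run the core
    // regression suite there; production traffic is unaffected.
    public boolean deployAndTestInactive(boolean regressionTestsPass) {
        return regressionTestsPass;
    }

    // Only after tests pass on the inactive pool do we swap, so the
    // newly deployed pool starts serving production with no downtime.
    public void swap() {
        String tmp = activePool;
        activePool = inactivePool;
        inactivePool = tmp;
    }

    public static void main(String[] args) {
        NodePools prod = new NodePools();
        if (prod.deployAndTestInactive(true)) {
            prod.swap();
        }
        System.out.println(prod.active()); // prints "B"
    }
}
```

Note the safety property: if the regression tests fail on the inactive pool, no swap happens and production never sees the bad version; the previous pool also remains available for a quick swap back after release.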
In addition to QA tasks and activities, I also act as 3rd-level support: for issues that couldn't be solved by 1st- or 2nd-level support, I analyze and resolve them, and (in case there actually is an issue on our side) create tickets in the JIRA bug tracking system. So I regularly work with the IT Direct and CSN systems as well.
I'm pretty sure there are more topics I can share about our setup, so if you have questions about any part of this post, or about your own Agile setup, or any concerns or difficulties, you are most welcome to share them and we can develop an interesting discussion and knowledge exchange under this topic.
What does your team’s Agile setup look like?