Viewpoint Scrum Experience
At the beginning of our current release, the Viewpoint engineering team decided to split itself into multiple cross-functional, self-governing scrum teams. Each team consists of 5-10 members who together can perform all the functions needed to produce working software, from design to development to test. The Delta Force scrum team focused on updating Viewpoint to support new Teradata 14.0 database features. This article describes some of our challenges and successes in transitioning to a more agile, scrum-oriented team.
We decided to move to this approach at the end of our last release, when we realized that we were becoming a ‘scrum-but’ team: we had adopted some of the principles, but were not really holding ourselves accountable to the core ideas behind scrum. As a team, we agreed that we could do better, and came up with this approach to improve our performance.
What we did
We started by defining the roles of each member of the team. We identified a ‘Product Owner’ (PO), who would define the features to be developed, prioritize them based on market value, and adjust both the features and their priorities as needed throughout the lifecycle of the project. We also identified a ‘Scrum Master’ (SM), who would ensure that the team stayed fully functional and productive. We then put together a cross-functional team that could take any feature from concept to completion, including design, development, test, and documentation.
Our first task was to come up with an overall roadmap for the release. Though our delivery date was far off, we approached the planning with the goal of being able to release meaningful software at the end of each sprint. For this, we established high-level user stories, came up with initial estimates, and assigned the stories to different sprint backlogs.
At the beginning of each sprint, the PO was responsible for defining the feature that we were committing to deliver. This was sometimes hard to achieve, since every detailed interaction had to be well defined, but it was essential to getting good estimates and delivering a quality product. As a team, we came together for about an hour of sprint planning. In this meeting, we broke the feature down into tasks and discussed any prerequisites or dependencies. Once the tasks were defined, we used our combined experience to provide the best possible estimate in hours for each task, allowing some margin for uncertainty or risk on unfamiliar features. This estimation approach worked well for our team, compared with more typical techniques like story points or planning poker, which are usually more time consuming. The SM was responsible for tracking team members’ vacations and availability for the different tasks, leaving time in the sprint for customer issues, and so on. If the estimates showed there was not enough time to complete all the tasks in the sprint, we moved the lower-priority tasks into the next sprint’s backlog (or the product backlog).
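The capacity check described above amounts to a simple calculation. The sketch below is purely illustrative, not a tool we actually used; the task names, estimates, and team figures are all made up:

```python
# Hypothetical sketch of the sprint-planning capacity check.
# Task estimates in hours, in priority order (all names/numbers invented).
tasks = [
    ("Design review", 24),
    ("Feature coding", 160),
    ("Iterative testing", 80),
    ("Documentation", 24),
    ("Bug fixing buffer", 40),
]

# Capacity: members * sprint days * focus hours per day, minus known
# vacations and a reserve for customer issues (illustrative figures).
members, sprint_days, focus_hours = 6, 10, 6
vacation_hours, customer_reserve = 18, 30
capacity = members * sprint_days * focus_hours - vacation_hours - customer_reserve

# Commit tasks in priority order while they fit; anything that does not
# fit moves to the next sprint's backlog (or the product backlog).
committed, backlog, remaining = [], [], capacity
for name, hours in tasks:
    if hours <= remaining:
        committed.append(name)
        remaining -= hours
    else:
        backlog.append(name)

print(f"Capacity: {capacity} h")
print(f"Committed: {committed}")
print(f"Deferred : {backlog}")
```

With these invented numbers, the team has 312 hours of capacity, commits the first four tasks (288 hours), and defers the bug-fixing buffer to the backlog.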
Our focus each sprint was to start and complete a feature: a fully designed and tested piece of software that we could deliver to the customer by the end of the sprint. This was our guiding principle for deciding which tasks belonged in a sprint. During each sprint we also created tasks for code reviews, bug fixes, design meetings, and so on; these take time away from the feature work itself, but are vital to the overall quality of the end product. Allocating specific time for these overhead tasks created visibility for time tracking and promoted accountability within the team for performing them.
One of our more challenging experiences in moving to scrum was iterative testing. It took several sprints for the development and testing sides of the team to learn to support one another, and we are still getting better at it. The developers were used to finishing a feature towards the end of a sprint, while the testers were used to testing that feature in the following sprint. This did not meet our goal of delivering tested software at the end of each sprint, so we worked as a team to change that mindset. The next problem we found was that developers prematurely declared a feature ‘finished’ when only a subset of its functionality was truly ready for test. This wasted testers’ time on incomplete software and produced inaccurate bug reports, because the feature simply wasn’t ready to be tested yet. Our team evolved in how we communicated this, to get past the confusion and ambiguity. We encouraged developers to smoke test functionality on our test server before handing it over, while the test team learned to quickly try out a feature in progress, getting another pair of eyes on the software to identify deviations from the specification and providing feedback earlier in the development cycle. This process grew naturally, giving our team better quality and less confusion overall.
Many of our challenges were resolved by improving our communication over time. Our daily scrum meeting with the core team was our most important communication checkpoint. We also set up a Skype chat room for the team, where we could keep each other informed of issues as they arose. We worked hard to convey our progress, or velocity, both to ourselves and to external observers through our burn-down chart. This only worked, however, if every team member actively logged their time. It felt a bit tedious at first, but repeated reminders from the SM helped change that behaviour. When a team member’s task was blocked by external factors, we stopped logging time to it and brought it to the attention of the PO to follow up with the outside party. The burn-down chart then accurately showed our velocity slowing while the team waited on something beyond its control.
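A burn-down chart of this kind boils down to plotting remaining estimated hours against an ideal trend line. The following is a minimal, hypothetical sketch with invented numbers; the flat stretch on days 5-6 models a blocked task where little time was logged:

```python
# Hypothetical burn-down computation: remaining hours per sprint day.
total_hours = 120   # sum of task estimates at sprint planning (illustrative)
sprint_days = 10

# Hours the team logged against tasks each day; days 5-6 show the slowdown
# from a blocked task that time was no longer being logged to.
logged_per_day = [14, 13, 15, 12, 2, 3, 14, 15, 16, 16]

remaining = total_hours
for day, logged in enumerate(logged_per_day, start=1):
    remaining = max(0, remaining - logged)
    # Ideal burn-down: a straight line from total_hours to zero.
    ideal = total_hours * (1 - day / sprint_days)
    print(f"Day {day:2d}: remaining {remaining:3d} h (ideal {ideal:5.1f} h)")
```

The printed series makes the plateau visible to external observers in exactly the way described above: remaining work barely moves while the team is blocked, then drops again once the dependency clears.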
We showed off the results of our work in a demo at the end of each sprint. Our demos usually went very well, since we could present quality software to the larger audience. We also held a retrospective at the end of each sprint, which was probably the single most useful practice in our scrum experience. In the retrospective, we tried hard to get every team member to comment on the sprint, whether praise, confusion, or other ideas the whole team could listen and respond to. This is where most of our team’s evolution took place: identifying problems and taking action to improve our process. Over time, we transitioned from delivering something because the schedule dictated it to having each team member take ownership of a quality product that we had signed up for, worked on together, and could take pride in delivering.
Authors: Greg Gibsen, Betsy Cherian