Hopefully I will be able to provide a bit of insight into using VersionOne to plan some of your QA and testing activities.  My background is in QA (engineer, lead, manager) and Agile culture (ScrumMaster, project manager).

VersionOne is an agile project management tool with Acceptance and Regression Testing features; as such, it allows an organization to track the testing activities that contribute directly to the progress of a project, such as workitem completion.  These features are geared toward planning and tracking the testing work itself in the context of the project (the ‘what’) rather than in the context of the product being developed (the ‘how’).

In terms of agile practices, Acceptance Testing validates that a feature meets the customer’s expectations.  In many cases it is the customer (whether a ‘real’ customer, a Business Analyst, or a Product Owner) who defines the acceptance criteria, giving the QA members of the team clear guidelines for validating that the feature has been implemented satisfactorily.

A common practice of testing in agile is for QA members to pair up with developers working on the same story from the beginning.  The test cases for the acceptance test are agreed upon, and testing begins as soon as code is being produced (where it makes sense to start the validation process, certainly).  Many teams choose not to log defects against the new code; instead, because the QA engineer and the developer are already paired, they collaborate on bringing these bugs to the fore and address them on the spot, writing new unit tests and pushing out new code through their Continuous Integration system.  The acceptance tests tied to the story may capture the full detail of the feature’s specifics (when VersionOne is the only tool used to track testing efforts), or may be used simply in binary form – as a pass/fail status.

Some common questions about testing using VersionOne:

I know I can write Acceptance Tests for each story.  Where do I write integration test plans?  Sanity test plans?  Performance test plans?

If these types of testing/test plans contribute to the immediate acceptance of the story, certainly additional tests can be created for the story.  These additional tests may also be used simply as binary markers – pass/fail – for the story; many teams that choose this workflow rely on third-party test management tools, many of which integrate with VersionOne.

Is there a way to distinguish between tests written by a PO or a QA team member?

One suggestion is to use a custom field with a list-type value for each originator.  For instructions on how to do this, in VersionOne go to Help > Contents > Administration > Custom Fields.  Once you create this field, you can add it to a pertinent grid, such as the Backlog, or use it to filter your stories in any list.
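If you later want to pull stories filtered by such a field programmatically, VersionOne also exposes a REST query endpoint (rest-1.v1).  A minimal sketch that just builds the query URL – the server address and the custom field name `Custom_TestOriginator` are hypothetical examples, not names from your instance:

```python
from urllib.parse import urlencode

# Hypothetical VersionOne instance URL; replace with your own server.
BASE = "https://example.com/VersionOne/rest-1.v1/Data"

def story_query_url(originator: str) -> str:
    """Build a rest-1.v1 query for Stories filtered by a custom list field.

    Custom fields are exposed through the API with a 'Custom_' prefix;
    'Custom_TestOriginator' here is an assumed example name.
    """
    params = {
        "sel": "Name,Number,Custom_TestOriginator.Name",
        "where": f"Custom_TestOriginator.Name='{originator}'",
    }
    return f"{BASE}/Story?{urlencode(params)}"

print(story_query_url("QA"))
```

Fetching that URL (with your instance's authentication) would return only the stories whose originator field matches.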

I have assigned Regression Tests to a Test Suite; how do I add them to a Test Set?

Test Sets are instances of all the Regression Tests in a Test Suite.  Once you have added Regression Tests to a Test Suite, you have the ability to create these Test Sets.  In the graphic below, notice that there is a button to Generate a Test Set – this creates an instance of each test in the suite, and the resulting Test Set can be scheduled to a Release or Sprint:

When I create a Regression Test, is there a way to add it to a Theme or Feature Group?

Regression Tests are essentially templates for Acceptance Tests; every time that a Regression Test is added to a Test Suite or Test Set, it creates an instance of that Acceptance Test, with the parent Backlog Item being the Test Set.  Acceptance Tests are related to a parent Backlog Item as well, whether it is a Story or Defect.  Tests in VersionOne are not related to Themes – the parent Backlog Items are.  A Test Set, as a parent Backlog Item for Regression Tests, would be the asset to which you can associate a Theme.
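The relationships described above can be sketched with a toy model – the class names and fields here are illustrative only, not VersionOne's actual API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RegressionTest:
    """Reusable template; not itself schedulable."""
    name: str

@dataclass
class Test:
    """Acceptance-test instance; always has a parent Backlog Item."""
    name: str
    parent: "TestSet"

@dataclass
class TestSet:
    """A Backlog Item, so it (not its Tests) can carry a Theme."""
    name: str
    theme: Optional[str] = None
    tests: List[Test] = field(default_factory=list)

def generate_test_set(suite_name: str,
                      regression_tests: List[RegressionTest],
                      theme: Optional[str] = None) -> TestSet:
    """Instantiate every Regression Test in a suite as a Test on a new Test Set."""
    ts = TestSet(name=f"{suite_name} run", theme=theme)
    ts.tests = [Test(name=rt.name, parent=ts) for rt in regression_tests]
    return ts

suite = [RegressionTest("Login"), RegressionTest("Checkout")]
ts = generate_test_set("Smoke Suite", suite, theme="Payments")
```

The point of the sketch: the Theme hangs off the Test Set (a Backlog Item), while each generated Test points back to that Test Set as its parent.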

More to come :-).

Join the Discussion

    • Mike McFarland

      This area of testing is of particular interest to our group. I would like to see some stories about how testers convey failures to the rest of the team. Is there an automatic email like Bugzilla? How do people handle defects found during ‘exploratory’ testing? Do they write a defect and include it in the sprint?

    • Chris T.

      In a sort of homegrown way, the following has evolved out of our explorations:
      – sometimes the scrum team deals with issues without recording them, just by chatting with each other; that way, if something is blocking testing, the fix can be incorporated into the build before the end of the sprint, allowing testing to continue.
      – if the issue surfaces somewhat later in development, teams sometimes put the TEST on hold (‘in process’), record a new “fix it” task for the developer, and then set the TEST back to ‘ready’ when the developer is done.
      – some teams ‘fail’ the TEST and create a “fix it” task for the developer. Once the developer has fixed the problem, the DEV creates a “test it” task for QA.

      We have yet to determine what is ‘best practice’, mostly because what makes sense early in the DEV cycle may not make sense later…. and of course there’s the tendency to want to “process-ize” and “document” everything anyway.

    • Sahil Jain

      What is the difference between “Story Tests” and “Test” in Story Details page?
      Where is the recommended place to write acceptance criteria?

    • Victor Hernandez

      Hi Sahil,

      A VersionOne Test asset always has a parent – in the case of Tests associated with a Story, such as those visible on the Story Details page, teams usually refer to them as ‘story tests’, though they are not called out as such in the system.

      For writing Acceptance Criteria, we often see two approaches:
      – Including it as part of the Description of the Story
      – Creating an Acceptance Criteria Test for the Story

      I personally like the latter approach, as it allows a Product Owner to review the Test and either pass or fail it explicitly, making it highly visible.

      Hope this is helpful!

      • Sahil Jain


        I agree with the latter approach as well.
        Currently we have been placing them in the “Story Tests” section.
        Although I agree that having them under “Tests” makes sense from the product owner’s perspective to pass or fail them – what, then, is the “Story Tests” section utilized for?

        Further, if we put them under “Tests” section, do you anticipate any conflicts from the QA test scripts?

        Thanks so much for your advice,
