Republishing Results

There was an interesting question posted in my comments on the TRX file format, about being able to republish results to a TFS server. This is an oft-requested behavior, and the short answer is "no, you can't". Once you've published a run, it's there forever. You can, technically, delete and re-publish, but in the scenario DmytroL was looking at, it ain't gonna help. This is addressed in the next big release (you can see a preview of it in the April Rosario CTP), but in that release the work is focused on using the new manual test tool to republish results.

However, it is possible to write a bit of code yourself to merge the TRX files (something we don't have in the product).

Let's work through DmytroL's questions:

1. Is it possible to publish several TRX files under the same test run (of course provided all the TRX files in question specify the same correct test run GUID in the TestRun tag)?

No, you can't. If you try to publish a run that has already been published (irrespective of the actual contents), we will block you -- you'll receive an error that the run has already been published.

2. Is it permitted to overwrite the outcome of a certain test within the same test run?

No. There needs to be one outcome. While it is possible (based on some simple testing I just did) to have the same test appear in the results file with different executionIds, this is completely unsupported. I would expect things to become pretty sticky on the server if you published a file like this (if you have a test server, you could try it... YMMV, everything may break, and no, I won't help you if it takes down your production server...).
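To make the "same test, different executionIds" situation concrete, here is a minimal sketch (not a supported tool) that scans a TRX file and reports any test that appears more than once. It assumes the VS 2005/2008-era TRX namespace and the UnitTestResult attributes (testId, testName, executionId, outcome); adjust the namespace for your version of Visual Studio.

```python
import sys
import xml.etree.ElementTree as ET
from collections import defaultdict

# Namespace used by VS 2005/2008 TRX files; other versions may differ.
NS = {"t": "http://microsoft.com/schemas/VisualStudio/TeamTest/2006"}

def find_duplicate_results(trx_path):
    """Return {testId: [UnitTestResult elements]} for tests with more than one result."""
    root = ET.parse(trx_path).getroot()
    by_test = defaultdict(list)
    for result in root.findall(".//t:UnitTestResult", NS):
        by_test[result.get("testId")].append(result)
    return {tid: rs for tid, rs in by_test.items() if len(rs) > 1}

if __name__ == "__main__":
    for tid, rs in find_duplicate_results(sys.argv[1]).items():
        name = rs[0].get("testName")
        outcomes = [r.get("outcome") for r in rs]
        ids = [r.get("executionId") for r in rs]
        print(f"{name} ({tid}): executionIds {ids}, outcomes {outcomes}")
```

If that prints anything, you have a file the server was never designed to handle.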

3. Are there any potential problems if these TRX files are published by different people?

See (1) & (2) :)

The scenario behind all this is: a test run is a collection of manual tests put together to verify certain functionality (I know there's a notion of a test list for this, but one can't slice the TFS warehouse cube by test lists). Now, subsets of the test run are assigned to different QA engineers for execution. So, each QA engineer goes through her "share" of tests and publishes the results - but the results from all QA engineers involved should go under the same test run.

Write a tool that loads the results from each run, sorts them by time, picks the most recent result for each test (or whatever criteria you decide on), outputs a single merged TRX, and publishes that.
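Here's a minimal sketch of that merge, assuming the same VS 2005/2008-era TRX namespace as above and that "latest endTime wins" is the rule you want. It doesn't rewrite the run-level counters or summary, and it doesn't publish; you'd still push the merged file with mstest /publish afterwards.

```python
import sys
import copy
import xml.etree.ElementTree as ET

# Namespace used by VS 2005/2008 TRX files; other versions may differ.
TRX_NS = "http://microsoft.com/schemas/VisualStudio/TeamTest/2006"
NS = {"t": TRX_NS}
ET.register_namespace("", TRX_NS)  # keep the output free of ns0: prefixes

def merge_trx(input_paths, output_path):
    # Use the first file as the skeleton (TestRun id, TestDefinitions, etc.).
    merged = ET.parse(input_paths[0])
    results_node = merged.getroot().find("t:Results", NS)

    # Pick one winning result per testId -- here, the latest endTime.
    # The comparison is lexical, so it assumes every file writes endTime
    # in the same format and UTC offset.
    latest = {}
    for path in input_paths:
        root = ET.parse(path).getroot()
        for result in root.findall(".//t:UnitTestResult", NS):
            test_id = result.get("testId")
            end_time = result.get("endTime") or ""
            if test_id not in latest or end_time > (latest[test_id].get("endTime") or ""):
                latest[test_id] = result

    # Replace the skeleton's results with the winners and write the merged file.
    for old in list(results_node):
        results_node.remove(old)
    for result in latest.values():
        results_node.append(copy.deepcopy(result))
    merged.write(output_path, xml_declaration=True, encoding="utf-8")

if __name__ == "__main__":
    merge_trx(sys.argv[1:-1], sys.argv[-1])
```

Usage would be something like `python mergetrx.py engineer1.trx engineer2.trx merged.trx`. Treat it as a starting point: a real version should also fix up the run's summary counts and deal with results that reference tests missing from the first file's TestDefinitions.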

Also, it's important to note that you can slice the warehouse the way you want to -- when you publish multiple runs against one build (which is allowed, supported and expected), the roll-up data goes into the warehouse with the most recent result becoming the "roll-up" result for that test.