"Tests Passed" count does not match the log

I have a build process that includes some grunt unit tests.  Nothing fancy about it.

When I do a build/deploy it shows "Tests passed: 531". However, if I open the "Build Log" and go to the unit test step I find "Completed 547 tests in 425 seconds. 0 failed, 547 passed". The number it shows for the build is consistent; if I do 3 builds in a row it always shows 531. I've searched the build log for the number 531 but it doesn't exist, so I don't believe it's a simple parsing error.

Any suggestions on things I should check?  Thanks!

6 comments

Hi Chris,

What TeamCity version do you use? Since TeamCity 9.0, multiple tests with the same name within the same build are considered a single test with an invocation count. If any of these test runs fail, the whole test is considered failed in the build. For more details, see the related issue TW-24212.
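
For example (a hypothetical sketch, not taken from your build log), a reporter that emits TeamCity service messages with a duplicated test name produces 3 test runs but only 2 tests, because the two 'suite.checkLogin' runs are folded into a single test with an invocation count of 2:

    ##teamcity[testStarted name='suite.checkLogin']
    ##teamcity[testFinished name='suite.checkLogin']
    ##teamcity[testStarted name='suite.checkLogout']
    ##teamcity[testFinished name='suite.checkLogout']
    ##teamcity[testStarted name='suite.checkLogin']
    ##teamcity[testFinished name='suite.checkLogin']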

We are on TeamCity Enterprise Version 9.1.1 (build 37059).

Thank you for your reply, Alina. I am investigating it further now to see if that is our issue. I edited this post to remove something I initially thought was the cause of the problem but that turned out to be a dead end.

I've been trying to simplify our tests down to the bare minimum that exhibits the problem. That seems to be about 10 tests with several screenshots and several decisions. TeamCity showed 8 passed, but the log showed 9 passed. I confirmed that all of those tests had unique names and unique screenshots. They didn't to start with, but even after altering them the discrepancy still exists.

I tried through trial and error to further narrow down the cause, but without being able to see more about where TeamCity is getting its number from I haven't been able to.
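
One way to see where TeamCity's number comes from, assuming your grunt reporter writes TeamCity service messages into the build log, is to count the distinct test names yourself. A rough sketch (the script name and log path are hypothetical, not part of TeamCity or grunt):

    // count-test-names.js - counts distinct test names in a build log that
    // contains TeamCity service messages. TeamCity 9.0+ shows one test per
    // distinct name, so if distinct names < test runs, duplicate names
    // explain the gap between the build page and the log.
    const fs = require('fs');

    const log = fs.readFileSync(process.argv[2], 'utf8');
    // The name value may contain |-escaped characters (e.g. |' for a quote).
    const re = /##teamcity\[testStarted name='((?:[^'|]|\|.)*)'/g;
    const counts = new Map();
    let runs = 0;
    let m;
    while ((m = re.exec(log)) !== null) {
      runs += 1;
      counts.set(m[1], (counts.get(m[1]) || 0) + 1);
    }

    console.log('test runs: ' + runs + ', distinct names: ' + counts.size);
    for (const [name, count] of counts) {
      if (count > 1) console.log('duplicated ' + count + 'x: ' + name);
    }

Running it as "node count-test-names.js build.log" prints the distinct-name count; if that matches the number TeamCity displays, duplicate names are the cause.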

Could you please attach a screenshot of the build step settings, the build log, a screenshot of the Tests tab, and the files with the tests for the build?

Sorry for the delay in responding, Alina. My results were not consistent, so I wanted to take the time to sort out what was going on.

I now believe you are right that the test naming is the problem. I believe I had a caching problem in our build steps that was muddying my results. The problem only seems to manifest when decisions are involved and when screenshots are involved. As long as I make sure all the decision text in a flow is unique, the problem does not happen. However, there seems to be more to it than that. For example, the tests directly after a decision work fine regardless of what the decision text is. But if that decision contains further decisions, that's when things stop reporting correctly.
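
One way to keep the names unique without hand-editing every decision would be to derive each test's name from its full decision path rather than just the nearest decision text, so nested decisions that reuse wording still come out distinct. A made-up sketch (nameForTest and the labels are hypothetical):

    // Hypothetical: build a unique test name from the whole decision path.
    // ['checkout', 'has coupon?', 'coupon expired?'] + 'shows error'
    //   -> 'checkout > has coupon? > coupon expired? > shows error'
    function nameForTest(decisionPath, testLabel) {
      return decisionPath.concat(testLabel).join(' > ');
    }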

Thank you for your offer to diagnose further.  For now we are going to rename all our decision text to work around the problem.  Thanks for the help!

Thank you for the update! I'm glad that the cause of the problem is clear now.
