Time gaps in build steps

Hello,

One of our builds seems to have an 8-minute time gap between two build steps.

Here are the last two lines of that build step from the build log in verbose mode:

[15:11:42]Process exited with code 0
[15:19:46]##teamcity[buildStatisticValue key='buildStageDuration:buildStepRUNNER_100' value='757674.0']


Here are the pertinent lines from the build agent's teamcity-build.log:

[2015-03-03 15:11:43,190] out -
[2015-03-03 15:11:43,190] out -
[2015-03-03 15:11:43,244] out - Process exited with code 0
[2015-03-03 15:19:46,650] out - ----------------------------------------- [ DEPLOYMENTS::QA deployment #QADEPLOY_qa07_2.70.0_921(42275) (buildId = 180656) ] -----------------------------------------
[2015-03-03 15:19:46,654] out - Starting: /opt/srt/home/teamcity/buildAgent/temp/agentTmp/custom_script2796797246488642224
[2015-03-03 15:19:46,654] out - in directory: /opt/srt/home/teamcity/buildAgent/work/82ae2e47fb94f303
[2015-03-03 15:19:46,657] err - /opt/srt/home/teamcity/buildAgent/temp/agentTmp/custom_script2796797246488642224

Is there a way to analyze why that time gap exists? I posted about a different build problem earlier, and I'm wondering if we have a communication problem between the build agents and the TeamCity server.
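
For reference, the kind of analysis I have in mind is just scanning the agent log for long pauses between consecutive timestamped lines. A rough sketch (it assumes the "[YYYY-MM-DD HH:MM:SS,mmm]" format shown above; the log path and threshold are placeholders):

import re
from datetime import datetime

LOG_PATH = "teamcity-build.log"   # placeholder path
THRESHOLD_SECONDS = 60            # report pauses longer than this

# Matches the leading "[2015-03-03 15:11:43,190]" style timestamp.
STAMP = re.compile(r"^\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\d{3}\]")

previous = None
with open(LOG_PATH) as log:
    for line in log:
        match = STAMP.match(line)
        if not match:
            continue
        current = datetime.strptime(match.group(1), "%Y-%m-%d %H:%M:%S")
        if previous is not None and (current - previous).total_seconds() > THRESHOLD_SECONDS:
            print(f"{current - previous} gap before: {line.rstrip()}")
        previous = current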



Thanks,
-Dave

Hi Dave,

Do you see this problem constantly? If so, does the time gap always appear between the same build steps, and between what types of build runners? Please attach screenshots of the affected build steps' settings.
Also, please attach the full teamcity-agent.log and a couple of thread dumps taken from the agent while it is in this state. You can take an agent thread dump from the Agent details page.
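
If the web UI is hard to reach while the agent is in this state, a thread dump can also be taken directly on the agent host with the JDK's jps/jstack tools. A minimal sketch (it assumes a JDK is installed on the agent machine, and the "agent" process filter is only a guess at how your agent process shows up in jps):

import subprocess

# List Java processes and pick the ones that look like the TeamCity build agent.
jps = subprocess.run(["jps", "-l"], capture_output=True, text=True, check=True)
agent_pids = [line.split()[0] for line in jps.stdout.splitlines()
              if line.strip() and "agent" in line.lower()]

for pid in agent_pids:
    # jstack prints a full thread dump of the target JVM to stdout.
    dump = subprocess.run(["jstack", pid], capture_output=True, text=True, check=True)
    with open(f"agent-threaddump-{pid}.txt", "w") as out:
        out.write(dump.stdout)
    print(f"wrote agent-threaddump-{pid}.txt")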


This seems to have been caused by an overload on the TeamCity server when all 14 of our build agents were building at the same time.

Here is the output of top at the time:

top - 02:45:05 up 56 days,  6:37,  1 user,  load average: 5.06, 5.48, 6.06
Tasks: 216 total,   2 running, 214 sleeping,   0 stopped,   0 zombie
Cpu0  : 51.0%us,  1.3%sy,  0.0%ni, 47.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 99.7%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st
Cpu3  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  : 52.8%us,  2.0%sy,  0.0%ni, 45.2%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   5968396k total,  5925124k used,    43272k free,  1322796k buffers
Swap:  3080184k total,   262912k used,  2817272k free,  1566148k cached


The TeamCity 8.1.5 server is running on a CentOS 6 VM and, as you can see above, had 6 vCPUs. After we doubled the number of vCPUs to 12, the problem seems to have gone away.
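
For anyone who hits the same symptom, a quick comparison of load average against CPU count makes this kind of saturation easy to spot. A minimal sketch using only the Python standard library (Unix-only, since it relies on os.getloadavg):

import os

# A 1-minute load at or around the CPU count means runnable work is queuing
# up faster than the machine can execute it, which is roughly what the top
# output above shows (load ~5-6 with several of the 6 vCPUs pegged at 100%).
load1, load5, load15 = os.getloadavg()
cpus = os.cpu_count()

print(f"load averages: {load1:.2f} {load5:.2f} {load15:.2f} on {cpus} CPUs")
if load1 >= cpus:
    print("CPU saturated")
elif load1 >= 0.8 * cpus:
    print("approaching saturation")
else:
    print("headroom available")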

-Dave
