Can you change Cache Expiration on Artifact downloads?

I was thinking of using a squid proxy to cache artifact downloads to improve performance in remote offices. I watched an artifact download and saw that the artifacts are indeed cacheable:

Response Header

Accept-Ranges:bytes
Cache-Control:max-age=3600
Connection:Keep-Alive
Content-Disposition:attachment; filename="buildSystemProperties.zip";
Content-Length:46561
Content-Type:application/zip
Date:Thu, 12 Jul 2012 21:14:52 GMT
ETag:"259bdc91b81bb3e77c1d61f4061cd270"
Expires:Thu, 12 Jul 2012 22:14:52 GMT
Keep-Alive:timeout=15, max=87
Last-Modified:Thu, 12 Jul 2012 20:52:58 GMT
Server:Apache-Coyote/1.1
Set-Cookie:RememberMe=110174604^35#5219796908785302220; Expires=Thu, 26-Jul-2012 21:14:52 GMT; Path=/; HttpOnly
Via:1.1 teamcity
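To sanity-check what a proxy would see, the max-age can be pulled out of that capture programmatically; a minimal sketch (the header value is taken from the capture above):

```python
import re

def max_age_seconds(cache_control):
    """Extract the max-age directive (in seconds) from a Cache-Control header."""
    m = re.search(r"max-age=(\d+)", cache_control)
    return int(m.group(1)) if m else None

# Cache-Control value from the capture above
print(max_age_seconds("max-age=3600"))  # 3600 seconds, i.e. one hour
```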



Is it possible to increase the max-age/Expiration to another value? For our purposes, one hour is too short. We can override the cache expiration in Squid itself, but this is not preferable.
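For completeness, the Squid-side override would look roughly like this; a sketch of the workaround, not a recommendation, and the /repository/download/ pattern is my guess at matching TeamCity artifact URLs:

```
# squid.conf: keep artifact downloads cached for up to two weeks (20160 minutes),
# ignoring the server's one-hour Expires/max-age -- the "hacky" override
refresh_pattern -i /repository/download/ 20160 100% 20160 override-expire ignore-reload
```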

3 comments

Anthony,

TeamCity build agents already have built-in artifact caches. Do you find them inefficient? (By the way, the recent 7.0.4 release contains related fixes and improvements.)


Nikita,
We have been using TeamCity to deploy our software for years. The problem we have had is efficiently getting files to our remote offices. All of our build agents are on machines in our central office, yet we have offices halfway around the world. If we used the build agents directly to deploy a 50 MB (compressed) application to 10 desktops in a remote office, the application would be copied 10 times, which is very slow and sends the compressed file across the WAN nine more times than necessary.

We could have build agents in the remote locations, but that would mean tripling the number of agents we use for deployment and setting up machines to host them, which is not easily achieved due to resource constraints.

What we do now is copy the files to a DFS root that we manually replicate to the remote offices. Then we use the build agents to schedule a job on the target machine that runs the install from this DFS root. Since the file is already in the remote office, the machine does not have to reach back to the central office for it, saving 450 MB in long-distance file transfers.

While this system works, we lose some transparency and also the artifact-handling features of TeamCity. I was thinking I could make REST/HTTP calls back to the TeamCity server for the file from the target deployment machine, through a proxy like Squid, thereby caching the file in the remote office. The first download would obviously take a hit, but subsequent downloads would be very fast. The build agents would not push the files to the deployment machines; rather, the deployment machines would pull them from TeamCity.

This would also help end users who want to download a file from our TeamCity server directly from a remote office. We would want these files available until we release the next version, which in our office is two weeks (our current sprint length). One hour isn't long enough, and I would like to avoid ignoring the cache timeout in Squid, which is a little hacky and generally a bad decision.
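A rough sketch of that pull model, assuming Python is available on the deployment machine; the server name, proxy address, build configuration id, build number, and artifact name below are all hypothetical, and the /repository/download URL layout reflects my understanding of TeamCity's artifact paths:

```python
import urllib.request

def artifact_url(server, build_type_id, build_id, artifact_path):
    """Build a TeamCity artifact download URL (path layout assumed)."""
    return f"{server}/repository/download/{build_type_id}/{build_id}/{artifact_path}"

def fetch_via_proxy(url, proxy, dest):
    """Pull an artifact through an HTTP proxy (e.g. the Squid in the remote office)."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy})
    )
    with opener.open(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())

# Hypothetical names: server, build configuration, build number, artifact
url = artifact_url("http://teamcity.example.com",
                   "MyProject_Deploy", "1234", "buildSystemProperties.zip")
# fetch_via_proxy(url, "http://squid.remote-office.example.com:3128",
#                 "buildSystemProperties.zip")
```

The first machine in the office to pull a given artifact pays the WAN transfer; every later pull is served from the Squid cache, which is the whole point of wanting a longer max-age.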


Anthony,

Thank you for your feedback.

Unfortunately, there is no way to configure or change this value. I have filed an issue: http://youtrack.jetbrains.com/issue/TW-22530

Please vote for or watch it.

