I'm running a fairly sizeable TeamCity operation: some 20 agents, 140+ projects and nearly 1000 build configurations. We build mainly Windows stuff, but also Linux and Mac. With the recent introduction of Git, the load on the TeamCity server and agents increased, as the developers started to push and build their feature branches. All well and good.
The build system for Windows is split up to allow effective use of the agents. In other words, one configuration builds the debug binaries and exports them as artifacts, followed by, say, FxCop, unit tests and a MoMA run. Each subsequent step imports what it needs from its upstream configuration. We are shifting from push-based to pull-based build chains.
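For illustration, a chain of that shape could be expressed in TeamCity's Kotlin DSL roughly as follows. This is a sketch, not our actual setup: the configuration names and artifact paths are hypothetical, and the DSL shown assumes a reasonably recent TeamCity version.

```kotlin
import jetbrains.buildServer.configs.kotlin.*

// Upstream: compiles the debug binaries and publishes them as artifacts.
object BuildDebug : BuildType({
    name = "Build Debug"
    // Hypothetical path: package everything under bin\Debug into one zip.
    artifactRules = """bin\Debug => binaries.zip"""
})

// Downstream: imports the binaries from upstream and runs FxCop on them.
object FxCop : BuildType({
    name = "FxCop"
    dependencies {
        artifacts(BuildDebug) {
            // Unpack the upstream zip into a local bin directory.
            artifactRules = "binaries.zip!** => bin"
        }
    }
})
```

Each further step (unit tests, MoMA) declares a similar artifact dependency on whatever it needs from upstream.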
To conserve disk space I initially set the global retention policy to keep the last 5 builds. However, with all those builds and branches, 5 was too low: there could easily be 10 active branches in the larger projects. So I increased the retention to 15 builds, which immediately forced me to hand a few hundred GB of disk to the server. While disk is cheap in general, the SAN-mounted disk my IT department provides is significantly pricier than your average 3 TB USB disk for home use.
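In Kotlin DSL terms, the kind of per-configuration retention override involved looks roughly like this (a sketch with illustrative numbers; the configuration name is hypothetical):

```kotlin
import jetbrains.buildServer.configs.kotlin.*

object BuildDebug : BuildType({
    name = "Build Debug"
    cleanup {
        // Keep artifacts only for the last 15 builds...
        artifacts(builds = 15)
        // ...but keep the build history itself much longer
        // (365 days here, matching the one-year history retention).
        history(days = 365)
    }
})
```

The tension is exactly this split: history can be kept cheaply for a year, but every extra build's worth of artifacts costs real SAN space.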
The problem is that when a configuration has been cleaned and a downstream configuration wants to import its artifacts, I get an error (something along the lines of 'failed to download artifacts'). The build history is still available (I keep it for a year), but the artifacts are gone. This makes it very hard to reproduce an old build. Not totally impossible, as I could presumably create a "dummy" branch in Git from the relevant historical commit and push something to that branch. However, that is not really what I want to do.
My desired way of working would be to treat the artifacts as a cache. On a cache miss (either because the upstream configuration has never been built, or because its artifacts have been cleaned out), I want TeamCity to traverse the dependency graph upstream and build whatever the downstream configuration is asking for. That would of course cascade and force the entire chain to be rebuilt if I requested a rebuild from a month-old commit, but it would work. Currently it doesn't.
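To make the request concrete, here is what I mean in Kotlin DSL terms. The snapshot and artifact dependencies are real DSL constructs; the commented-out option expressing rebuild-on-miss is invented by me and does not exist, which is precisely the gap (names are hypothetical):

```kotlin
import jetbrains.buildServer.configs.kotlin.*

object BuildDebug : BuildType({
    name = "Build Debug"
    artifactRules = """bin\Debug => binaries.zip"""
})

object FxCop : BuildType({
    name = "FxCop"
    dependencies {
        // Real DSL: pin this build to a specific upstream build in the chain.
        snapshot(BuildDebug) {}
        artifacts(BuildDebug) {
            artifactRules = "binaries.zip!** => bin"
            // NOT a real option -- the behavior I am asking for:
            // on a cache miss, re-queue BuildDebug at the same revision
            // instead of failing with 'failed to download artifacts'.
            // rebuildOnMissingArtifacts = true
        }
    }
})
```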
The proposed approach would also reduce the need for an overly generous retention policy, as artifacts would be recreated (by rebuilding the configuration) when needed. A _possible_ caveat is that two builds may not produce bit-identical output (due to time stamps on files, dates and times the compiler injects into the binaries, or reports that embed the build time). Personally I don't see that as a major problem - the benefits outweigh this risk by far, imho.
If what I am requesting already exists, I'd be very happy if someone could point out how to change the configuration to make TeamCity behave that way.