TeamCity best practices

Hi all!

Searching the community forum for 'best practices' turned up some posts, but I haven't found anything I was interested in. Some of them were outdated, and some weren't answered in detail.
For example, this post has some quite serious questions from Ryan which were never answered. I am interested in the answers too. So, the questions:
What is the best practice for building a project under multiple configurations (debug, release, etc., not TeamCity build configurations) and platforms? Should each be a separate project, is there some easy way to handle this via build parameters, or is it considered standard to just lump them all under one configuration as separate build steps?
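To make the question concrete, the matrix being asked about looks something like this (a sketch only; the product, configuration, and platform names are hypothetical). Each pair could become a separate TeamCity build configuration, or a single parameterized one:

```kotlin
// Sketch of the (configuration, platform) matrix in question.
// Hypothetical names: these could map to separate TeamCity build
// configurations, or to values of build parameters on a single one.
fun buildMatrix(configurations: List<String>, platforms: List<String>): List<String> =
    configurations.flatMap { cfg -> platforms.map { plat -> "$cfg-$plat" } }

fun main() {
    // Prints: Debug-x86, Debug-x64, Release-x86, Release-x64
    buildMatrix(listOf("Debug", "Release"), listOf("x86", "x64")).forEach(::println)
}
```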

I am asking about best practices for enterprise-level TeamCity use. I mean: how do you organise thousands of build configurations across multiple OSes and architectures, hundreds of projects, gigabytes of artifacts, and tens of agents?
Let me try to cover more aspects of an enterprise TeamCity deployment by asking more questions. How do you manage all those configurations and have a fast, reliable system at the same time? What is the estimated resource consumption? Have you ever tested your TeamCity servers under high load?

So, compiling the above:
How to organise the projects/configurations?
How to organise resources for them?
And how to keep the system running builds fast, responding fast, and reliably?

If I missed something you think is important, please add it to this thread.


Hi Bulat,

Lately, I've realized that documentation on "TeamCity patterns" would be useful, with a structure similar to the (OO) design patterns. Dealing with multiple build configurations from the same source code is a good example.
It's similar to your idea of best practices, though with a different approach, more focused on the build definitions and structure.

My 2 cents.



Indeed, there is not much material around TeamCity which you can use and apply to your specific case easily.

However, I'd say that this is not because nobody has yet created the content. I actually doubt such universal guidelines are possible in any concise format.

We communicate with lots of customers and see lots of setups. There are some patterns, but there is no silver bullet to tackle the specific problems a randomly picked organization faces.

The recommendations would be either too general (not related to CI at all, applicable to the everyday life of any IT department/company) or too specific (based on the actual challenges of a particular case).

We are not in a position to educate on the former, and the latter is more in the area of consulting and professional services.
At the same time, we encourage everybody to contribute their experience and detail their use cases for the benefit of other users.

I should probably also note that TeamCity is not a solution tailored to solve a single task in the best possible way. Such solutions often require tailoring the entire process to the approach envisioned by the tool's authors, and they often fall apart under the pressure of reality when applied company-wide.

In its current state, TeamCity is more of a professional tool which craftsmen can use to their benefit.
That's not to deny that TeamCity is quite easy to start with, but when you get past the basic setup and want to address the specifics, you need to apply the tool according to your particular case.

Answering specific questions of yours:

> What is the estimated resource consumption?

These also depend a lot on the specific use case :). See the doc section for related notes.

> Have you ever tested your TeamCity servers under high load?

We do have some load tests.
If you have specific questions on whether a single TeamCity server setup will handle your loads, please detail your load projections.



We would gladly welcome such efforts and would certainly help!

It's just that preparing such content usually requires a somewhat different skill set than is typically found in a development team member.


So I'll go ahead and reply with some detail regarding our setup. My team manages the builds/deploys for about 50 products, which equates to about 1300 build configurations and 30 agents. We use VMs for the TC server and all the build agent servers. The TeamCity VM (dual Xeon at 2.13 GHz running Win2008R2 with 4 GB of memory) occasionally does struggle, as the Java process can max out for who knows what reason. We've recently started scheduling reboots of the entire TC environment once a week, and that has helped.

I'll focus on 3 products that release together (which I'll call a product set) and their TC strategy. For this product set, I manage around 3 different development branches at any one point, so that means 3 different projects for the builds of the 40 components (separate build configurations) that comprise the set. The idea is "build once, deploy many," since we only target a single platform (as this product is hosted at a data center). The majority of the build configurations are triggered on the VCS. This product set deploys to 9 environments, comprising a series of development and production environments. For these deployments, we use 9 projects for code deploys and 9 projects for database deploys. Each of the build configurations in these projects has snapshot dependencies on build configurations in the previously mentioned 3 build projects. However, 2 of the environments are for production, and those have no dependencies for 2 reasons: so that they do not attempt to rebuild during a time-sensitive deployment, and so that constantly changing dependencies is unnecessary (which often results in human error). If a rebuild is required for the 2 production environments, the builds are either triggered manually or I wait for them to finish before deploying (long story which I won't go into).
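For anyone reading along, in today's TeamCity Kotlin DSL a snapshot dependency like the ones described above would look roughly like this (a sketch only; the object names are made up, and our setup predates the DSL):

```kotlin
import jetbrains.buildServer.configs.kotlin.v2019_2.*

// Hypothetical deploy configuration that snapshot-depends on a component
// build, so a deploy always runs against a consistent, already-built
// source revision of that component.
object DeployCodeDev1 : BuildType({
    name = "Deploy Code (Dev1)"
    dependencies {
        snapshot(BuildComponentA) {
            // Don't even start the deploy if the component build failed.
            onDependencyFailure = FailureAction.FAIL_TO_START
        }
    }
})
```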

Of the 9 environments, 2 are production, 2 are for QA teams (on 2 branches), 2 are for dev teams (on 2 branches), 1 is for performance testing, 1 is for enterprise integration, and the last is for the next release candidate about to go to production. Each of these environments uses one of the 3 development branch build projects. The release candidate environment depends on a build project intended just for the release candidate (since the release candidate could be ANY branch). For this product set, we don't actively use the build artifact feature of TC. Instead, the build projects compile, run unit tests, and "archive" their build artifacts to a network filer per revision, along with their build scripts. When the deployments run, they do not check out from the VCS; instead they determine the latest version (by default, or it can be specified) stored in the archive and deploy directly from there to the deployment hosts. Deploys are quick this way since they don't have to check out from an enormous VCS. Integration tests are triggered following successful deployment to the lower environments. The lower environments auto-deploy at particular times of day for the critical items and more regularly for the less critical.
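The "deploy the latest archived version" step can be sketched like this (hypothetical; our actual scripts do more, and the version format is made up). The one subtlety worth showing is that version components must be compared numerically, since a plain string sort would put "1.9" after "1.10":

```kotlin
// Given the revision directory names on the network filer (e.g. "1.4.120"),
// pick the latest by comparing major/minor/patch components as numbers.
fun latestVersion(versions: List<String>): String? {
    fun part(v: String, i: Int) = v.split(".").getOrNull(i)?.toIntOrNull() ?: 0
    return versions.maxWithOrNull(
        compareBy({ part(it, 0) }, { part(it, 1) }, { part(it, 2) })
    )
}

fun main() {
    // Prints 1.10.0 (a string sort would have chosen 1.9.2)
    println(latestVersion(listOf("1.9.2", "1.10.0", "1.2.30")))
}
```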

For the build projects, the TC project name contains the name and release ID of the branch being built. For the deploy projects, the TC project name contains the environment name, "DB" or "Code", and the specific release ID targeted. This way developers, QA, etc. can quickly identify which projects are relevant for what they're trying to do. Plus, it allows our scripts that build wikis with environment information to easily parse and report what is targeted to run where. Of course, there are other scripts that analyze the target hosts and match them up with what's intended to go there to give you a "state of the union" report.

One thing that I strived for when setting up all of the projects for this product set was to reduce the maintenance of all the build configurations during release transitions. I hate having to manually change VCS settings or parameters on build configurations all the time. So, when an environment changes from one release to another, I only have to change 3 parameters in the project. One of these parameters denotes the branch for the VCS, so it automatically switches the VCS for all build configurations. It works great and has really cut the maintenance down to a dull roar. It used to take 2+ hours plus rollout time to transition the environments; now it's 5 minutes plus rollout time. Much of that savings is credited to our environment breakdown, where our lower environments are always targeted to the same branch type. Meaning, we have a mainline and a maintenance branch. One of the dev environments points to the mainline and the other to the maintenance branch. Likewise for the QA environments.
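The branch-parameter trick can be sketched in TeamCity's Kotlin DSL like this (a sketch; the repository URL and parameter name are hypothetical, and again our setup predates the DSL):

```kotlin
import jetbrains.buildServer.configs.kotlin.v2019_2.vcs.GitVcsRoot

// One shared VCS root whose branch comes from a project-level parameter.
// Retargeting every build configuration to a new release branch then
// means editing a single parameter on the project, not each
// configuration's VCS settings.
object ProductVcsRoot : GitVcsRoot({
    name = "Product sources (%release.branch%)"
    url = "https://example.com/git/product.git"
    branch = "refs/heads/%release.branch%"
})
```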

If TeamCity had a means to automate the switching of dependencies, I'd implement this product set in TC differently. As I said, my goal was NOT to have to change build configurations manually and introduce unnecessary risk. Anyway, I hope this little bit of detail helps someone. Of course, it's much more involved than what I've said here, but it gives you a rough picture of the strategy.


