Building with .NET Core and Docker in TeamCity on AWS
At Control F1, we're always evaluating the latest technologies to see if and how they'll fit with our clients' needs. One of our core strengths is .NET development, so we've recently been looking at the newly released Visual Studio 2017, along with .NET Core 1.1, and combining this with our ongoing use of Docker to create microservices. We like all our projects to have continuous integration to ensure a consistent and repeatable build process - in our case, we use a TeamCity instance running in AWS for this. However, actually getting everything to build in TeamCity wasn't quite as easy as we would have hoped due to a few minor niggles, so I've put together this blog post to capture everything that we needed to do.
Prerequisites
Some stuff I'm assuming you've already set up:
- An AWS account! You'll need full administrator rights or very close to them as we need to create some IAM objects.
- A TeamCity 10.x server, configured to allow on-demand EC2 agents. Hopefully everything here will also work for TeamCity 2017 - we just haven't quite upgraded our server yet.
- The work we've done here uses Amazon Linux, but should be relatively easy to port to your Linux distribution of choice. I've tried to point out where changes may be needed.
Creating your AWS infrastructure
You'll need to create (or reuse) a number of bits in AWS:
- An S3 bucket to store the artifacts needed to bootstrap the build agent. We have a bucket we use for a number of devops-related bits, so created a subfolder in the bucket for the artifacts we need.
- An IAM policy which allows access to the artifacts in the S3 bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1491312033000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::<name of your S3 bucket here>/<name of the folder in your bucket>/*"
            ]
        }
    ]
}
Hopefully it's fairly obvious what that policy does, but if not: it simply allows read access (s3:GetObject) to any objects in the specified folder of the S3 bucket.
- An IAM role to give the build agent access to the policy you just created. From the AWS console, create an "AWS Service Role" of type "Amazon EC2", and attach the policy you just created to the role.
- A security group which allows the server access to the build agent on port 9090. The best-practice way to do this is to add a security group to your TeamCity server, and then create a Custom TCP rule with protocol TCP, port range 9090 and a source of the security group on your TeamCity server. You'll also want to allow SSH access from your IP address, at least temporarily - this can be removed once you've set up the machine.
- An EC2 key pair to allow you to SSH into the build agent.
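If you'd rather script this than click through the console, the same objects can be created with the AWS CLI along roughly the following lines. This is a sketch only: the policy, role and group names are placeholders of my own, the policy JSON above is assumed to be saved locally as s3-read-policy.json, and the standard EC2 trust policy as ec2-trust-policy.json.
# Create the S3 read policy from the JSON above
aws iam create-policy --policy-name build-agent-s3-read --policy-document file://s3-read-policy.json
# Create an EC2 service role, attach the policy, and wrap the role in an instance profile
aws iam create-role --role-name build-agent-role --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name build-agent-role --policy-arn arn:aws:iam::<your account ID here>:policy/build-agent-s3-read
aws iam create-instance-profile --instance-profile-name build-agent-role
aws iam add-role-to-instance-profile --instance-profile-name build-agent-role --role-name build-agent-role
# Security group allowing the TeamCity server to reach the agent on port 9090
# (in a VPC you'll also need --vpc-id, plus a temporary SSH rule for your own IP)
aws ec2 create-security-group --group-name build-agents --description "TeamCity build agents"
aws ec2 authorize-security-group-ingress --group-name build-agents --protocol tcp --port 9090 --source-group <security group of your TeamCity server here>
# Key pair for SSH access to the agent
aws ec2 create-key-pair --key-name build-agent --query KeyMaterial --output text > build-agent.pem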
Populating the S3 bucket
Everything the build agent needs to bootstrap itself is stored in the S3 bucket. You'll need to upload at least three files into the bucket (note: please remove the .txt extension from 'make-build-agent' and 'buildAgent' before uploading them to the S3 bucket):
- make-build-agent: this is the primary script which runs to install everything needed on the agent (a rough outline of what it does is sketched after this list). Note that you will need to customize this script for your environment:
- Replace all instances of "<name of your S3 bucket here>/<name of folder here>" with the appropriate details for your S3 bucket.
- Replace both instances of "<DNS name of your TeamCity server here>" with the DNS name of your TeamCity server.
- If you need any SSL certificates installed on the agent (we need a couple for the TeamCity server and our local NuGet package source), upload them into the bucket and modify the appropriate lines in the "Get the necessary SSL certificates and install" section. On the other hand, if you don't need any certificates, you can remove the "sudo update-ca-trust extract" line.
- If you're using a Linux distribution which uses apt rather than yum, you'll need to make some minor changes to the "Install the packages we need" section.
- buildAgent: a very slightly modified version of the script from the TeamCity documentation to start the TeamCity agent when the machine boots. This is potentially the one thing that you may need to significantly change if you're using a different Linux distribution, particularly one which uses systemd rather than SysV init.
- The Docker SDK build targets. As of April 2017, there is an issue where adding Docker support to a Visual Studio 2017 project breaks the command-line build tools. To work around this, you need to install a couple of files from the Visual Studio 2017 distribution onto the build agent. Unfortunately, as these are part of the Visual Studio distribution, I can't make them available here... Zip up the contents of the "C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\Sdks\Microsoft.Docker.Sdk" directory (or wherever you have Visual Studio 2017 installed) and put this into the bucket as "Microsoft.Docker.Sdk.zip". The exact structure is that the "Microsoft.Docker.Sdk" directory should be inside the zip file:
$ unzip -v Microsoft.Docker.Sdk.zip
Archive: Microsoft.Docker.Sdk.zip
Length Method Size Cmpr Date Time CRC-32 Name
-------- ------ ------- ---- ---------- ----- -------- ----
0 Stored 0 0% 04-03-2017 10:57 00000000 Microsoft.Docker.Sdk/Sdk/
1750 Defl:N 679 61% 03-14-2017 11:15 526dbbe8 Microsoft.Docker.Sdk/Sdk/Sdk.props
1264 Defl:N 530 58% 03-14-2017 11:15 a3c37327 Microsoft.Docker.Sdk/Sdk/Sdk.targets
-------- ------- --- -------
3014 1209 60% 3 files
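To give a flavour of what the bootstrap script does without reproducing it in full, here's a rough outline. This is an illustration only - the package names, file names and paths below are assumptions and won't exactly match the real make-build-agent script:
#!/bin/bash -e
# Illustrative outline of a build-agent bootstrap script - not the real thing
# Install the packages we need (Amazon Linux / yum; substitute apt-get on Debian-based distributions)
sudo yum install -y docker java-1.8.0-openjdk unzip wget
sudo chkconfig docker on
# Get the necessary SSL certificates and install (skip this if you don't need any)
aws s3 cp s3://<name of your S3 bucket here>/<name of folder here>/my-internal-ca.crt /tmp/
sudo cp /tmp/my-internal-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
# Install the .NET Core SDK, then drop the Docker SDK targets from the bucket into it
aws s3 cp s3://<name of your S3 bucket here>/<name of folder here>/Microsoft.Docker.Sdk.zip /tmp/
# ... install the SDK, then unzip Microsoft.Docker.Sdk.zip into its Sdks directory ...
# Download and unpack the TeamCity agent from the server, and set it up to start on boot
wget http://<DNS name of your TeamCity server here>/update/buildAgent.zip
# ... unzip it, point buildAgent.properties at the server, then copy the buildAgent
#     script from the bucket into /etc/init.d and register it with chkconfig ...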
Configuring the TeamCity server
If you haven't already done so, you need to install the .NET Core plugin on your TeamCity server. From the "Plugins List" page on the server, upload the plugin zip and then restart the server. Unfortunately, there's no way to restart the server from the UI so you'll need to SSH into the server and run a few commands:
cd <path to your TeamCity install>
sudo ./runAll.sh stop
sudo ./runAll.sh start
Making a build agent
Actually creating a build agent is now a fairly simple task. Create a new EC2 machine in AWS, using the IAM role, security group and key pair you created earlier. Then simply SSH into the machine and run:
aws s3 cp s3://<name of your S3 bucket here>/<name of folder here>/make-build-agent .
chmod +x ./make-build-agent
./make-build-agent
At this point, it's probably worth performing a simple smoke test. Reboot the machine and after a couple of minutes it should appear on the "Unauthorized Agents" page on your TeamCity server. If it doesn't, you'll need to SSH into the machine and look at the agent log files to try to work out what's going wrong.
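The most useful log is the agent's own. Assuming the agent was installed under /opt/buildAgent (adjust the path to wherever your bootstrap script puts it), something like this will show whether it's managing to register with the server:
# Follow the TeamCity agent log while the agent starts up
tail -f /opt/buildAgent/logs/teamcity-agent.log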
If you want to perform more extensive testing at this point (i.e. checking that the build agent can actually build stuff rather than just register with the server), manually authorize the agent and skip the next bits about setting up a cloud profile. If you are going to create the cloud profile now, first shut down the machine and create an AMI from it. On the "Agent Cloud" page on TeamCity, select "Create new profile". Set "Cloud type" to "Amazon EC2" and "Region" to whatever AWS region you're using for your infrastructure. Next select "Add image" and set:
- "Image" to the ID of the AMI you've just created.
- "Subnet" to an appropriate subnet in your VPC.
- "Key pair name" to the key pair you created above.
- "Instance type" to something appropriate for the build jobs you'll be running; we use a "burstable" t2 type machine, but you may want to consider an m4 type machine if you are going to be continually running builds.
- "Security groups" to the security group you created above.
Creating a build configuration
Main build
- Create a new project in TeamCity pointing to wherever your source code is and proceed through to create the project.
- When creating the VCS root for the project, add an explicit reference to the branch you want to build to the "branch specification" section, something like "refs/heads/develop" if you're using Git.
- Ensure that the project has access to three parameters: "docker.hub.organization", "docker.hub.username" and "docker.hub.password" which specify your organization and a valid username / password combination on Docker Hub.
- When it comes to creating the build steps, do not accept the auto-generated steps: TeamCity seems to spot the .csproj file and think that this is an old-school .NET Framework project.
- Instead, add a build step with runner type ".NET Core (dotnet)", command "restore" and a working directory set to wherever the .sln file is. Add the name of the solution file to the parameters section - this is necessary as the Docker support puts the .dcproj file in the same directory as the solution. (The equivalent CLI commands are sketched after this list.)
- Add a second step with the same parameters but instead a type of "build".
- In the "Agent Requirements" section, add a requirement that "DotNetCore" equals "1.0.1". (Yes, that is really "1.0.1" - it refers to the SDK version, not the framework version itself).
Tests
Unfortunately, "dotnet test
" doesn't have the same behaviour as "dotnet build
" and needs to be run in every directory. Create a "Command line" step which uses the following script to find all "Test" projects and run "dotnet test" in the appropriate folder:
#!/bin/bash -e
directories=$(find . -name '*Test*.csproj' -exec dirname {} \;)
for directory in $directories; do
    pushd "$directory"
    dotnet test
    popd
done
Publishing the application
Nice and easy, this is just another ".NET Core (dotnet)" step (with the normal working directory and solution file name) with type "publish".
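Again for reference, this step is roughly equivalent to the following (with "MySolution.sln" as a stand-in for your solution file). With the default Debug configuration the output ends up in bin/Debug/netcoreapp1.1/publish, which the Docker step below relies on:
# Publish every project in the solution
dotnet publish MySolution.sln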
Building the Docker images
Very similar to the tests script, we iterate over every Dockerfile and build the appropriate image:
#!/bin/bash -e
directories=$(find . -name Dockerfile -exec dirname {} \;)
for directory in $directories; do
    pushd "$directory"
    mkdir -p obj/Docker/publish
    cp -r bin/Debug/netcoreapp1.1/publish/* obj/Docker/publish/
    name=$(ls *.csproj | sed -e 's/\.csproj$//' | tr .A-Z -a-z)
    tag=$(echo %teamcity.build.branch% | sed -e 's!^refs/heads/!!')
    sudo docker build -t "%docker.hub.organization%/$name:$tag" .
    popd
done
The only subtlety here is the copying of the binaries to "obj/Docker/publish": this is due to the default Dockerfile created by the Visual Studio tooling, which ignores everything except that directory.
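For context, the Dockerfile generated by the Visual Studio 2017 tooling looks roughly like the following (the project name and base image here are just examples) - note that it copies only obj/Docker/publish into the image, which is why the script above puts the published binaries there:
FROM microsoft/aspnetcore:1.1
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "MyWebApplication.dll"]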
Publishing the Docker images to Docker Hub
Just another command line script:
#!/bin/bash -e
sudo docker login -u %docker.hub.username% -p %docker.hub.password%
directories=$(find . -name Dockerfile -exec dirname {} \;)
for directory in $directories; do
    pushd "$directory"
    name=$(ls *.csproj | sed -e 's/\.csproj$//' | tr .A-Z -a-z)
    tag=$(echo %teamcity.build.branch% | sed -e 's!^refs/heads/!!')
    sudo docker push "%docker.hub.organization%/$name:$tag"
    popd
done
Conclusions
So, what did we learn from all this? Pretty much what you'd expect from using the latest and greatest software: it basically works, but there are some rough edges to work around. The other important thing to come out of this work was the automation of the build agent creation process itself, both for repeatability (so that we can always make a new build agent if we need to, for example to pick up the latest security updates) and for making it easy for the rest of the team - while I did the majority of the work here, when another member of the team found that a small change was necessary (adding the second SSL certificate), he was able to get a new build agent up and running in less than 15 minutes. One of the principles of good devops is "automate everything", and it's showing its value here.