TeamCity Kubernetes Support Plugin - Agents that can run Docker daemon

I installed the TeamCity Kubernetes Support Plugin and configured the Kubernetes Cloud Profile. I was able to deploy 3 agents into my Kubernetes cluster using the jetbrains/teamcity-agent image. The TeamCity Server connected to the agents and they are available to my project.

Now I want to be able to run build steps using the Docker runner. However, when I try running a simple build command, `docker version`, I get the error "Cannot connect to the Docker daemon at unix:///var/run/docker.sock". This is because the Docker daemon is not running inside the container running in Kubernetes.

The docs on Docker Hub for the jetbrains/teamcity-agent image say, "In a Linux container, if you need a Docker daemon available inside your builds, you have two options:"... The first uses volume mappings, and the second says to start the container using the --privileged flag. I want to be able to run Docker in a container, or Docker in Docker. So I need to use one of these options, but I'm not sure how to enable either of these two solutions in Kubernetes.

Does the TeamCity Kubernetes plugin support the scenario I'm looking for? Has anyone achieved it and how? I was hoping this would be a supported scenario and one I could easily deploy. This would be a great use case for running agents inside a cluster.

Thanks,

Kelly

16 comments

Hi, to run a container in privileged mode you need to specify this explicitly in the pod/deployment configuration. More information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

You also need to configure your cluster to allow such behaviour.
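For example, a minimal pod template sketch with the privileged flag set on the agent container (the pod and container names here are placeholders, not values generated by the plugin):

```yaml
# Minimal sketch of a privileged agent pod (names are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: teamcity-agent-privileged
spec:
  containers:
  - name: teamcity-agent
    image: jetbrains/teamcity-agent
    securityContext:
      privileged: true   # lets dockerd start inside the container
```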


Thanks for your response. I'm still pretty new to Kubernetes so I was hoping you could guide me some more. In TeamCity, when I add an Agent image to the Kubernetes cloud provider, I can choose "Simply run single container", or "Use a custom pod template". I'm guessing I have to create my own pod template, based on the jetbrains/teamcity-agent image, with the privileged mode specification, is this right?


Is there any documentation on this? I'm not sure how I'm supposed to get the pod to have a unique name when I go into cloud profile and spin up a new agent.


TeamCity will automatically generate a unique pod name. You can set up a name prefix, if you'd like.

There isn't much documentation available - https://github.com/JetBrains/teamcity-kubernetes-plugin/


Cool, I got it to work. I also had to add the environment variable DOCKER_IN_DOCKER=start.


@Kmenzel can you please share your pod yaml definition?


I don't know if all of the stuff in the spec needs to be in there, but here is what I have:

apiVersion: v1
kind: Pod
metadata:
  name: jetbrains-teamcity-agent-1
  namespace: teamcity-system
spec:
  containers:
  - env:
    - name: TEAMCITY_KUBERNETES_SERVER_URL
      value: http://teamcity:8111
    - name: SERVER_URL
      value: http://teamcity:8111
    - name: TEAMCITY_KUBERNETES_IMAGE_NAME
      value: "1"
    - name: TEAMCITY_KUBERNETES_CLOUD_PROFILE_ID
      value: kube-1
    - name: TEAMCITY_KUBERNETES_INSTANCE_NAME
      value: jetbrains-teamcity-agent-1
    - name: DOCKER_IN_DOCKER
      value: start
    image: jetbrains/teamcity-agent
    imagePullPolicy: IfNotPresent
    name: jetbrains-teamcity-agent-1
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-q5tsg
      readOnly: true
    securityContext:
      privileged: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node-04
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-q5tsg
    secret:
      defaultMode: 420
      secretName: default-token-q5tsg


I want to ask: did you choose "Use custom pod template" or "Use pod template from deployment"? What did you set next? @Sergey Pak @Kmenzel Thanks


I had tried like this, but:

buildagent@jetbrains-teamcity-agent-380:/$ service docker status
 * Docker is not running

buildagent@jetbrains-teamcity-agent-380:/$ service docker restart
 * Docker must be run as root
buildagent@jetbrains-teamcity-agent-380:/$ sudo service docker restart
[sudo] password for buildagent:
Sorry, try again.
[sudo] password for buildagent:
Sorry, try again.
[sudo] password for buildagent:
sudo: 3 incorrect password attempts


Hi, since 2020.1 TC agent containers run as a non-root user (for security reasons). To be able to use Docker you need to explicitly set the user to root (-u 0).

Also, there's a related issue; a new tag will be deployed soon (https://youtrack.jetbrains.com/issue/TW-66322).

To start Docker in the container you need to supply the env var:

DOCKER_IN_DOCKER=start
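Putting the pieces from this thread together, the container part of a pod template might look like this (a sketch; the container name is a placeholder):

```yaml
# Sketch: privileged root container that starts dockerd via the
# image's DOCKER_IN_DOCKER hook (container name is a placeholder).
containers:
- name: teamcity-agent
  image: jetbrains/teamcity-agent
  securityContext:
    privileged: true
    runAsUser: 0        # root, the pod-spec equivalent of `-u 0`
  env:
  - name: DOCKER_IN_DOCKER
    value: start
```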

 


Thanks, this is important:

securityContext:
  fsGroup: 1000
  runAsUser: 0

0 stands for the root user.

@Sergey Pak


When deploying on k8s, another question: the free community version of TeamCity only has 3 agents. Does one build run on one agent, and does one agent correspond to one pod? So if I want to run more builds at the same time in k8s, do I have to buy more agents? @Sergey Pak


One agent corresponds to one running build agent app, which is one per pod, so yes.

 


Thanks. Now I have set:

  image: jetbrains/teamcity-agent
  imagePullPolicy: IfNotPresent
  name: jetbrains-teamcity-agent-381
  securityContext:
    privileged: true
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  volumeMounts:
  - mountPath: /var/run/docker.sock
    name: test-volume
dnsPolicy: ClusterFirst
enableServiceLinks: true
securityContext:
  runAsUser: 0
  fsGroup: 1000
schedulerName: default-scheduler
serviceAccount: default
serviceAccountName: default
volumes:
- name: test-volume
  hostPath:
    path: /var/run/docker.sock

to use the same Docker daemon inside and outside the pods. But with docker run -v, the mounted directory is resolved outside the pod, on the node. Do you have a good idea? @Sergey Pak
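One possible workaround (a sketch under assumptions, not confirmed in this thread): since the shared daemon on the node resolves docker run -v paths, mount the agent's work directory at the same path on the node and inside the pod, so both sides see the same files. The /opt/buildagent/work path is an assumption about the agent image's default work directory:

```yaml
# Assumption: /opt/buildagent/work is the agent work directory.
# In the container spec:
volumeMounts:
- mountPath: /opt/buildagent/work
  name: agent-work
# In the pod spec:
volumes:
- name: agent-work
  hostPath:
    path: /opt/buildagent/work
```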


Hi everyone,

we've also just upgraded to 2020.1 and I would like to make use of the new approach of running the TC agent deployed in our Kubernetes environment as a non-root user. We are using our own Dockerfile, based on the teamcity-agent image, to install a bunch of things.
We would also like to use Docker from within the TeamCity agent, and we are facing a similar issue to the one already mentioned in this thread during startup of the agent:

"* Docker must be run as root"

We have been following the upgrade notes here:
https://www.jetbrains.com/help/teamcity/upgrade-notes.html#UpgradeNotes-AgentDockerimagesrunundernon-rootuser

As 759587231 said above, I had to set runAsUser: 0 in the YAML to enable the Docker service again inside the TeamCity agent.

Is my understanding correct that if we need to use Docker inside the TeamCity agent, we need to run it entirely as the root user again and not as the "buildagent" user?

Sergey Pak

Thanks in advance for some hints,
Thomas