Unmet requirements: docker.server.osType contains linux
Hi,
I have a Linux (Ubuntu 18.04) based TeamCity installation running 2018.2.4 (build 61678).
Docker and docker-compose are installed locally:
root@sd7:~# docker -v
Docker version 18.09.2, build 6247962
root@sd7:~# docker-compose -v
docker-compose version 1.17.1, build unknown
On the agent I see the following variables:
docker.version | 18.09.2
dockerCompose.version | 1.17.1
Thus, the agent is reported as incompatible because of the unmet requirement:
Unmet requirements:
docker.server.osType contains linux
I can't understand this; the other related posts I found are not identical and stem from different issues.
Thanks for help,
Matthias
Hi,
the issue with "docker.server.osType" not showing up usually means that the docker command run from the agent cannot connect to the running Docker daemon. This is usually due to a lack of permissions, as Docker by default only accepts connections from root and from members of the docker group: https://docs.docker.com/install/linux/linux-postinstall/
Please make sure the appropriate configuration is in place (the user running the build agent is a member of the docker group), then restart the agent. That should make it work. If it doesn't, you may need to restart that user's session so the group change takes effect.
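A minimal sketch of those steps, assuming the agent runs under a user named tcagent and is installed under /opt/teamcity-agent (both names are placeholders, adjust them to your installation):
# Add the user that runs the TeamCity build agent to the docker group.
sudo usermod -aG docker tcagent
# Verify the agent user can reach the Docker daemon; this should print "linux",
# which is the value the agent reports as docker.server.osType.
sudo -u tcagent docker info --format '{{.OSType}}'
# Restart the agent so it re-runs property detection.
sudo -u tcagent /opt/teamcity-agent/bin/agent.sh stop
sudo -u tcagent /opt/teamcity-agent/bin/agent.sh start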
Thanks, that was it!
(P.S.: I didn't find any TeamCity documentation about these prerequisites for Docker support, nor about the best way to configure things like registry logins etc.)
I've added a comment to document it in an issue we had here about it: https://youtrack.jetbrains.net/issue/TW-52609 to try and improve our documentation about it.
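In the meantime, a minimal sketch of a registry login from a script build step, assuming the credentials are passed to the build as environment variables named REGISTRY_USER and REGISTRY_PASSWORD (those names and the registry host are placeholders, not a TeamCity convention):
# Log in to a private registry before pulling or pushing images.
echo "$REGISTRY_PASSWORD" | docker login --username "$REGISTRY_USER" --password-stdin registry.example.com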
Hi Dushant,
first of all, please don't post a different question across multiple threads. It creates extra workload for us and it won't speed up a response.
Secondly, the likely answer to your problem is the first answer I provided in this same thread; it is most likely related to your Docker installation rather than to TeamCity itself. Please test it out.
We have many different TeamCity servers running and have seen this error occur on two instances, which is a small percentage, and it does not appear to be deterministic. The most recent occurrence was resolved by manually starting an agent; that somehow resulted in many other agents being created and builds running. No configuration was changed for the build agents.
Could you provide more details on the behavior you encountered? I will see if it is related or not.
In general, such issues are related to the Docker configuration, as Denis mentioned, and should be resolved by following the recommendations he provided.
Best regards,
Anton
Hi Anton Vakhtel, Denis Lapuente,
We are running multiple TeamCity server instances and their respective agents in a few k8s clusters, and we've seen some of the agents fail to start processing the build queue because of the mentioned error:
Unmet requirements: docker.server.osType exists
When we manually start an agent, that seems to clear the error: the agent detects the Docker installation and OS when it registers with the server, and the other agents then start processing the queue.
We haven't changed the agent pod definitions/k8s manifests, so we are not sure why the agents would suddenly get into a state where they don't know the Docker server OS type.
Our server is using the image:
jetbrains/teamcity-agent:2024.03
and our agents are using the image:
jetbrains/teamcity-agent:2024.03-linux-sudo
Please let us know if you require additional information.
Justin
Hi Justin,
we have seen Kubernetes changes in the past that broke behavior which used to work, so I suspect that's what's going on here. As mentioned throughout the thread, the error means that TeamCity can't connect to the Docker daemon. If Kubernetes does something to the agent on start and that behavior has changed, it would explain what you are seeing.
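One way to check that from inside the cluster, assuming the affected agent pod is called teamcity-agent-0 in a teamcity namespace (both names are placeholders), is to run docker in the pod directly:
# Should print "linux"; if it errors out or hangs, the agent cannot reach
# the Docker daemon and docker.server.osType will not be reported.
kubectl exec -n teamcity teamcity-agent-0 -- docker info --format '{{.OSType}}'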
What k8s setup are you running? Is it on a cloud provider or is it your own internal setup? Which versions?
Hi Denis - We've created Request #6596154 so that we could go into more detail about our setup.