Cannot login to docker.io - config file is a directory

Hi everyone,

We have a strange problem logging into Docker Hub on our agents.
We are running TeamCity version 2024.03.2 with cloud agents (image: teamcity-agent:2024.03.2-linux-sudo) in an EKS cluster.
The build is a dockerized build that runs the Gradle runner inside an amazoncorretto:17 container.

Randomly, and not reproducibly, the build fails right at the beginning with the following log output:

Cannot login to registry docker.io
An error occurred while executing 'docker login -u "xxxxxxxxxxx" --password-stdin ':
WARNING: Error loading config file: /root/.docker/config.json: read /root/.docker/config.json: is a directory
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Error saving credentials: rename /root/.docker/config.json3665204406 /root/.docker/config.json: file exists

This happens before the build container is started, at the very beginning of the build.
Does anyone know what the problem could be?

If it were a configuration problem, I would expect it to fail every time, but as I said, it fails unpredictably…

Thanks and best regards,

Till

Official comment

Till Woerner Hi, could you please share the full build log as a first step? You can upload it with https://uploads.jetbrains.com/ and share the upload ID here so we can view it privately.

Marco Klöhn Hi, it looks like you edited your comment. Is it working and resolved for you now? Thanks!

Best regards
Anton

Hello,

It works again; I updated my agents.

I got this error when building with an ARM agent:

Problem while listening for container events: Cannot run program "/bin/sh": error=0, Failed to exec spawn helper: pid: 905027, exit value: 1
Cannot login to registry docker.io
An error occurred while executing 'docker login -u "xxxxxxxx" --password-stdin ':

All my x86 agents are working.

Greetings

Marco

Anton Vakhtel Yeah, everything works fine now. Thanks for the reply.

Hi Anton,

I uploaded a zipped build log:

Upload ID: 2024_06_19_2ANHroYrN5zM2sTH5KmWRk

Thanks for looking into this. If you need any additional information, I would be happy to help :)

Best regards,

Till

Something else I can add: we seem to have found a workaround. It is not really nice, but it seems to have worked; we have not had any failed builds in the last two days.

On every start of an agent pod, we simply create a nearly empty config file with an empty auths section in the postStart pod lifecycle event, because the error message in the log implies that the file does not exist when it is mounted and is therefore mounted as a directory, which then leads to errors when accessing it.
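
For illustration, this is roughly what such a postStart hook looks like in the agent pod spec (a simplified sketch; the /root paths match the error log above and assume the agent process runs as root):

lifecycle:
  postStart:
    exec:
      command:
        - /bin/sh
        - -c
        - |
          # Make sure config.json exists as a regular file before anything
          # tries to bind-mount it; a missing path would otherwise be
          # created as a directory, which is exactly the error we see.
          mkdir -p /root/.docker
          [ -e /root/.docker/config.json ] || echo '{"auths":{}}' > /root/.docker/config.json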

This looks to me like some kind of timing issue between starting the agent, mounting the file, and creating the authentication configuration from the Docker Hub connection.

Best regards,
Till

Dear Till,

May I ask for the deployment configuration of the agent? If the complete configuration cannot be shared, then for investigation purposes we would like to understand whether Docker-in-Docker or Docker-out-of-Docker is used and which folders are mounted as volumes.
You can share the configuration the same way as before; it is accessible only to JetBrains.

Best regards,
Anton

Hi Anton,

I am not 100% sure what you need, so please get back to me if I can provide more information. For now I have uploaded only the K8s deployment description we use in TeamCity to start the agents; I have also put some more information further down in this post.
The upload ID is: 2024_06_21_H6YHQrr1MdndC5Ce7d5Qma

Please be aware that this deployment already contains the workaround, which has been working for us for the last couple of days (it is the postStart lifecycle command that fixed it for us).

This error occurred when using Docker-in-Docker. Basically, it means that the official agent image runs on the node.
The build then uses a Docker connection and build feature to start a build image, say the Amazon Corretto image.
Inside this build image we again start one or more containers, and for that the Docker login should be available so we can pull these images without being throttled.

The build step passes these mounts to the build image:

-v /var/run/docker.sock:/var/run/docker.sock
-e HOME=/root
-v $HOME/.docker/config.json:/root/.docker/config.json 

We tried it without these mounts, but then the build image has no Docker Hub access.
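
To make the setup concrete, the full command ends up looking roughly like this (a sketch only: TeamCity generates the real command line, and the image and Gradle invocation here are placeholders):

# Sketch of the generated command; image and build command are placeholders.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e HOME=/root \
  -v $HOME/.docker/config.json:/root/.docker/config.json \
  amazoncorretto:17 \
  ./gradlew build

If $HOME/.docker/config.json is missing on the agent when such a bind mount is set up, Docker creates the path as a directory, which matches the "is a directory" warning and is what the postStart workaround guards against.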

I hope this helps somehow, please contact me anytime, if you need more information :)

Best regards,

Till

Hi Till,

We looked into the described issue closely with the development team, and as of now, we concluded the following:
• Summary: an issue with `docker login` - the inability to update `$HOME/.docker/config.json` because the file already exists. It is intermittent and was observed in the EKS, Docker-in-Docker case.
• `$HOME/.docker` is a directory managed and used by Docker itself; TeamCity does not interact with it directly. Given that we do not (and probably should not) change this directory, and given the similar discussions found on Docker forums (unfortunately, we also could not find any applicable potential causes in those discussions), we conclude that we cannot do anything at the TeamCity level to prevent this from happening.
Our current suggestion is to continue using the workaround you have in place. If I have any new information, I will update this post, but as of now, it seems there is nothing we can do from TeamCity's side.

Best regards,
Anton