kubectl context switching
Answered
Hi all,
I'm having some issues setting my Kubernetes context for an AWS EKS cluster and hoping someone may have run into this before...
I have a Command Line build step with a custom script that executes the following commands:
```
# Log into AWS.
aws configure set default.region %env.AWS_REGION%
aws configure set aws_access_key_id %env.AWS_ACCESS_KEY%
aws configure set aws_secret_access_key %env.AWS_SECRET_KEY%
# Set the kubectl context.
aws eks --region %env.AWS_REGION% update-kubeconfig --name %env.KUBERNETES_CLUSTER_NAME%
# Test the Kubernetes context is set correctly.
kubectl config current-context
```
In the agent build logs I see the following:
```
Updated context arn:aws:eks:eu-central-1:[ACCOUNT_NUMBER]:cluster/[CLUSTER_NAME] in /root/.kube/config
```
But on the very next line (for the test command), I see the following:
```
error: current-context is not set
```
When I hop onto the box and run the same commands as my SSH user, I can see that the context has been set correctly, and I'm able to run kubectl apply commands against the cluster successfully.
I'm running my TeamCity agent via systemctl, on TeamCity Professional 2020.1 (build 78475).
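In case it helps, here's a quick diagnostic I can drop into the step to compare the agent environment with my SSH session. It's only standard shell commands; the /root path comes from the update-kubeconfig log line above.
```
# Print what kubectl will see under the build agent,
# to compare against an interactive SSH session.
echo "HOME=$HOME"
echo "KUBECONFIG=${KUBECONFIG:-<not set>}"
ls -l "$HOME/.kube/config" /root/.kube/config 2>&1 || true
# Show whatever config kubectl actually loads (may be empty).
kubectl config view
```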
Cheers,
Rob
Fixed.
My build agent was running as root, so the kubeconfig was being written to /root/.kube/config rather than the ~/.kube/config that kubectl was resolving in the build step. Ensuring the KUBECONFIG env var was set to /root/.kube/config sorted things for me.
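For anyone hitting the same thing, here's a minimal sketch of the adjusted step. The export line is the only change, and it assumes the agent user is root so the file lands in /root/.kube/config:
```
# Log into AWS.
aws configure set default.region %env.AWS_REGION%
aws configure set aws_access_key_id %env.AWS_ACCESS_KEY%
aws configure set aws_secret_access_key %env.AWS_SECRET_KEY%

# Point kubectl at the kubeconfig that update-kubeconfig writes for root,
# since HOME may not resolve to /root under the systemd-managed agent.
export KUBECONFIG=/root/.kube/config

# Set the kubectl context.
aws eks --region %env.AWS_REGION% update-kubeconfig --name %env.KUBERNETES_CLUSTER_NAME%

# Test the Kubernetes context is set correctly.
kubectl config current-context
```
Setting env.KUBECONFIG as a build parameter on the configuration (or Environment=KUBECONFIG=/root/.kube/config in the agent's systemd unit) would achieve the same thing without editing the script.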