I am sure the original issue is resolved by now, but I will put more information here so that anyone else still facing a similar issue with any of the setups below can use these steps.
When we create an EKS cluster by any method (CloudFormation, CLI, eksctl), the IAM role or user who created the cluster is automatically bound to the default Kubernetes RBAC group system:masters
(https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles), so the creator of the cluster gets admin access to it. We can always grant access to other IAM users/roles via the aws-auth ConfigMap, but to do that we must first authenticate as the IAM user/role that created the cluster.
To find out which role/user created the EKS cluster, search CloudTrail for the CreateCluster API call; the creator is shown in the sessionIssuer section, in the arn field
(https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html).
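To illustrate what that lookup returns, the creator ARN can be pulled out of a CreateCluster event programmatically. This is only a sketch: the event below is a trimmed, fabricated sample following the CloudTrail userIdentity layout, and the helper function is mine, not part of any AWS tooling.

```python
import json

# Fabricated, trimmed CreateCluster event following the CloudTrail
# userIdentity layout; a real event carries many more fields.
sample_event = json.dumps({
    "eventName": "CreateCluster",
    "userIdentity": {
        "type": "AssumedRole",
        "sessionContext": {
            "sessionIssuer": {"arn": "arn:aws:iam::111122223333:role/eks-role"}
        },
    },
})

def cluster_creator_arn(event_json: str) -> str:
    """Return the ARN of the entity that made the call."""
    identity = json.loads(event_json)["userIdentity"]
    # For an assumed role the creator is the sessionIssuer; for a plain
    # IAM user the ARN sits directly on userIdentity.
    if identity.get("type") == "AssumedRole":
        return identity["sessionContext"]["sessionIssuer"]["arn"]
    return identity.get("arn", "")

print(cluster_creator_arn(sample_event))  # → arn:aws:iam::111122223333:role/eks-role
```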
Setting up access to the EKS cluster is a little trickier when the cluster was created using an IAM role rather than an IAM user.
Below are the steps we can follow for each method when setting up access to the EKS cluster.
Scenario-1: Cluster was created using an IAM user (for example, eks-user)
Confirm that the credentials of the IAM user who created the cluster are set properly in the AWS CLI by running the command aws sts get-caller-identity:
$ aws sts get-caller-identity
{
"Account": "xxxxxxxxxxxx",
"UserId": "xxxxxxxxxxxxxxxxxxxxx",
"Arn": "arn:aws:iam::xxxxxxxxxxx:user/eks-user"
}
After that, update the kubeconfig file using the command below:
aws eks --region region-code update-kubeconfig --name cluster_name
This is how the config file looks once updated by the above command. Please do not edit this file directly unless necessary.
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: https://xxxxxxx.sk1.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
    user: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
current-context: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - eks-cluster
      command: aws
Once the above setup is done, you should be able to run kubectl commands.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP xxx.xx.x.x <none> 443/TCP 12d
Scenario-2: Cluster was created using an IAM role (for example, eks-role)
There are mainly four different ways to set up access via the CLI when the cluster was created using an IAM role.
1. Setting up the role directly in the kubeconfig file.
In this case we do not have to make the assume-role API call manually before running kubectl commands, because it is done automatically by aws/aws-iam-authenticator as configured in the kubeconfig file.
Let's say we are trying to set up access for the user eks-user. First, make sure that user has permission to assume the role eks-role by adding the assume-role permission to eks-user:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::xxxxxxxxxxx:role/eks-role"
}
]
}
Edit the trust relationship on the role so that it allows eks-user to assume the role:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::xxxxxxxxxxx:user/eks-user"
},
"Action": "sts:AssumeRole"
}
]
}
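To make the effect of the trust policy concrete, here is a small local sketch (plain JSON handling, not an AWS API) that checks whether a policy of the simple shape above lets a given principal call sts:AssumeRole:

```python
import json

# The trust policy from above, with a placeholder account ID.
trust_policy = json.loads("""
{
  "Version": "2008-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Principal": {"AWS": "arn:aws:iam::111122223333:user/eks-user"},
     "Action": "sts:AssumeRole"}
  ]
}
""")

def can_assume(policy: dict, principal_arn: str) -> bool:
    """Check the simple single-statement shape used above; real IAM
    evaluation (wildcards, conditions, Deny) is far richer."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        if "sts:AssumeRole" not in actions:
            continue
        allowed = stmt.get("Principal", {}).get("AWS", [])
        allowed = [allowed] if isinstance(allowed, str) else allowed
        if principal_arn in allowed:
            return True
    return False

print(can_assume(trust_policy, "arn:aws:iam::111122223333:user/eks-user"))  # → True
```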
Confirm that the IAM user credentials are set properly in the AWS CLI by running the command aws sts get-caller-identity. An important thing to remember: it should show the IAM user ARN, not an assumed-role ARN.
$ aws sts get-caller-identity
{
"Account": "xxxxxxxxxxxx",
"UserId": "xxxxxxxxxxxxxxxxxxxxx",
"Arn": "arn:aws:iam::xxxxxxxxxxx:user/eks-user"
}
After that, update the kubeconfig file using the command below:
aws eks --region region-code update-kubeconfig --name cluster_name --role-arn arn:aws:iam::xxxxxxxxxxx:role/eks-role
This is how the config file looks once updated by the above command. Please do not edit this file directly unless necessary.
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: https://xxxxxxx.sk1.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
    user: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
current-context: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - eks-cluster
      - --role
      - arn:aws:iam::xxxxxxx:role/eks-role
      command: aws
Once the above setup is done, you should be able to run kubectl commands.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP xxx.xx.x.x <none> 443/TCP 12d
2. If you have set up an AWS profile (https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html) in the CLI and want to use it with the kubeconfig.
Confirm that the profile is set properly so that it uses the credentials for eks-user:
$ cat ~/.aws/config
[default]
output = json
region = us-east-1
[eks]
output = json
region = us-east-1
[profile adminrole]
role_arn = arn:aws:iam::############:role/eks-role
source_profile = eks
$ cat ~/.aws/credentials
[default]
aws_access_key_id = xxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[eks]
aws_access_key_id = xxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Once this profile configuration is done, confirm that it works by running the command aws sts get-caller-identity --profile eks:
$ aws sts get-caller-identity --profile eks
{
"Account": "xxxxxxxxxxxx",
"UserId": "xxxxxxxxxxxxxxxxxxxxx",
"Arn": "arn:aws:iam::xxxxxxxxxxx:user/eks-user"
}
After that, update the kubeconfig file using the command below with the profile; note that we are not passing a role here.
aws eks update-kubeconfig --name cluster_name --profile eks
This is how the config file looks once updated by the above command. Please do not edit this file directly unless necessary.
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: https://xxxxx.sk1.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
    user: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
current-context: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - eks-cluster
      command: aws
      env:
      - name: AWS_PROFILE
        value: eks
Once the above setup is done, you should be able to run kubectl commands.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP xxx.xx.x.x <none> 443/TCP 12d
3. Assume the role some other way; for example, attach the IAM role directly to the instance.
If the role is attached to the instance profile, we can follow steps similar to those for the IAM user in Scenario-1.
Verify that the correct role is attached to the EC2 instance. Since the instance profile has the lowest precedence in the credential chain, this step also verifies that no other credentials are configured on the instance.
[ec2-user@ip-xx-xxx-xx-252 ~]$ aws sts get-caller-identity
{
"Account": "xxxxxxxxxxxx",
"UserId": "xxxxxxxxxxxxxxxxxxxxx:i-xxxxxxxxxxx",
"Arn": "arn:aws:sts::xxxxxxxxxxxx:assumed-role/eks-role/i-xxxxxxxxxxx"
}
After that, update the kubeconfig file using the command below:
aws eks --region region-code update-kubeconfig --name cluster_name
This is how the config file looks once updated by the above command. Please do not edit this file directly unless necessary.
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: https://xxxxxxx.sk1.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
    user: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
current-context: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - eks-cluster
      command: aws
Once the above setup is done, you should be able to run kubectl commands.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP xxx.xx.x.x <none> 443/TCP 12d
4. Manually assuming the IAM role via the aws sts assume-role command.
Assume the role eks-role manually by running the CLI command:
aws sts assume-role --role-arn arn:aws:iam::xxxxxxxxxxx:role/eks-role --role-session-name test
{
"AssumedRoleUser": {
"AssumedRoleId": "xxxxxxxxxxxxxxxxxxxx:test",
"Arn": "arn:aws:sts::xxxxxxxxxxx:assumed-role/eks-role/test"
},
"Credentials": {
"SecretAccessKey": "xxxxxxxxxx",
"SessionToken": "xxxxxxxxxxx",
"Expiration": "xxxxxxxxx",
"AccessKeyId": "xxxxxxxxxx"
}
}
After that, set the required environment variables using the values from the above output so that the CLI uses the credentials generated for this session:
export AWS_ACCESS_KEY_ID=xxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxx
export AWS_SESSION_TOKEN=xxxxxxxxxx
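The three exports can also be scripted. A sketch, assuming python3 is available; the JSON here is a placeholder with fake values standing in for real aws sts assume-role output:

```shell
# Placeholder JSON mimicking the Credentials section of `aws sts assume-role`
# output; in practice you would capture the real command's output here.
CREDS_JSON='{"Credentials":{"AccessKeyId":"AKIAEXAMPLE","SecretAccessKey":"secretExample","SessionToken":"tokenExample"}}'

# Pull one field out of the Credentials object.
json_field() {
  echo "$CREDS_JSON" | python3 -c "import json,sys; print(json.load(sys.stdin)['Credentials']['$1'])"
}

export AWS_ACCESS_KEY_ID=$(json_field AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(json_field SecretAccessKey)
export AWS_SESSION_TOKEN=$(json_field SessionToken)
echo "Using access key: $AWS_ACCESS_KEY_ID"
```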
After that, verify that we have assumed the IAM role by running the command aws sts get-caller-identity.
aws sts get-caller-identity
{
"Account": "xxxxxxxxxx",
"UserId": "xxxxxxxxxx:test",
"Arn": "arn:aws:sts::xxxxxxxxxx:assumed-role/eks-role/test"
}
After that, update the kubeconfig file using the command below:
aws eks --region region-code update-kubeconfig --name cluster_name
This is how the config file looks once updated by the above command. Please do not edit this file directly unless necessary.
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: https://xxxxxxx.sk1.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
    user: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
current-context: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:xxxxxxx:cluster/eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - eks-cluster
      command: aws
Once the above setup is done, you should be able to run kubectl commands.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP xxx.xx.x.x <none> 443/TCP 12d
NOTE:
I have tried to cover the major use cases here, but there might be other ways to set up access to the cluster as well.
Also, the above tests mainly target the first-time setup of an EKS cluster; none of the above methods touches the aws-auth ConfigMap yet.
Once you have given other IAM users/roles access to the EKS cluster via the aws-auth ConfigMap (https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html), those users can use the same sets of commands described above.
Update: If you are using SSO, the setup is pretty much the same. One thing to keep in mind, either with SSO or when using a role directly: if we are adding a path-based role to the ConfigMap, we have to strip the path from the role ARN. For example, instead of arn:aws:iam::xxxxxxxxxxx:role/path-1/subpath-1/eks-role, use arn:aws:iam::xxxxxxxxxxx:role/eks-role; that is, we remove /path-1/subpath-1. This is because when we run a kubectl command it first makes an AssumeRole API call, and the resulting assumed-role ARN does not contain the path, so if we include the path in the ConfigMap entry, EKS will deny those requests.
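The path stripping can be expressed as a tiny helper; this is an illustrative sketch (the function is mine, not part of any AWS tooling):

```python
import re

def strip_role_path(role_arn: str) -> str:
    """Drop any path segments from an IAM role ARN, e.g.
    role/path-1/subpath-1/eks-role -> role/eks-role, matching the
    assumed-role ARN that EKS actually sees after AssumeRole."""
    match = re.match(r"^(arn:aws:iam::\d+:role)/(?:.+/)?([^/]+)$", role_arn)
    if not match:
        raise ValueError(f"not an IAM role ARN: {role_arn}")
    return f"{match.group(1)}/{match.group(2)}"

print(strip_role_path("arn:aws:iam::111122223333:role/path-1/subpath-1/eks-role"))
# → arn:aws:iam::111122223333:role/eks-role
```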
kubectl error: You must be logged in to the server (Unauthorized) – how to fix
This error occurs when the kubectl client does not have the correct certificates to interact with the Kubernetes API server. Every certificate has an expiry date, and Kubernetes has mechanisms to renew certificates automatically.
I was getting the error “You must be logged in to the server (Unauthorized)” while executing kubectl commands. The commands had been working perfectly in the cluster a few hours earlier, and no modifications had been made to the cluster.
You can use the following command to check the expiry details of the certificates used internally in the Kubernetes cluster. If the certificates are expired, we need to renew them.
kubeadm alpha certs check-expiration
A sample response is given below.
In the sample response, the certificates will expire in 6 hours. If you see that the certificates are about to expire, you can renew them by issuing the following command.
Note: Take a backup of all the old certs and the config file as a safety precaution.
kubeadm alpha certs renew all
A sample response from executing the above command is given below.
Now you can check the expiry date of the certificates again and verify that everything got updated.
kubeadm alpha certs check-expiration
Also execute some kubectl commands to ensure that kubectl has the right config file to interact with the cluster. Sample commands are given below.
kubectl get pods --all-namespaces
kubectl get nodes
If you are still getting the error You must be logged in to the server (Unauthorized), try the following workaround.
Log in to the master node and copy the config file /etc/kubernetes/admin.conf to $HOME/.kube/config. The command is given below.
cp /etc/kubernetes/admin.conf $HOME/.kube/config
After doing this, try executing kubectl commands again. You can copy this config file to any node where kubectl is installed.
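What the check-expiration command does is essentially read each certificate's notAfter date; the same inspection can be done with openssl. The sketch below generates a throwaway self-signed certificate just for demonstration; on a real control plane you would point openssl at the files under /etc/kubernetes/pki/ instead.

```shell
# Generate a throwaway self-signed cert (demo only; not a real cluster cert).
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 365 -nodes -subj "/CN=kube-apiserver-demo" 2>/dev/null

# Print the notAfter (expiry) date of the certificate.
openssl x509 -enddate -noout -in /tmp/demo.crt

# Exit status tells whether the cert is still valid one hour from now.
openssl x509 -checkend 3600 -noout -in /tmp/demo.crt \
  && echo "certificate is not about to expire"
```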
Last updated: 2022-09-20
I’m using kubectl commands to connect to the Amazon Elastic Kubernetes Service (Amazon EKS) API server. I received the message "error: You must be logged in to the server (Unauthorized)". How do I resolve this?
Short description
The cluster admin must complete the steps in one of the following sections:
- You’re the cluster creator
- You’re not the cluster creator
Then, the person who received the error must complete the steps in the You’re the user or role that received the error section.
Resolution
You’re the cluster creator
Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.
1. To see the configuration of your AWS CLI user or role, run the following command:
aws sts get-caller-identity
The output returns the Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) user or role.
2. Confirm that the ARN matches the cluster creator. To find the cluster creator, you can run a query against the cluster's API server logs in CloudWatch Logs Insights. Note: Amazon EKS maps the cluster creator IAM entity on the control-plane side as kubernetes-admin. If API server logging was activated when the cluster was created, then the creator entity can be queried.
The query returns the IAM entity that is mapped as the cluster creator. Assume the IAM entity returned in the output, and then make the kubectl calls to the cluster again.
3. Update or generate the kubeconfig file using one of the following commands.
As the IAM user, run the following command:
aws eks update-kubeconfig --region aws-region --name eks-cluster-name
As the IAM role, run the following command:
aws eks update-kubeconfig --region aws-region --name eks-cluster-name --role-arn arn:aws:iam::xxxxxxxxxxx:role/eks-role
Note: Replace eks-cluster-name with your cluster name, aws-region with your AWS Region, and the role ARN with your role.
4. To confirm that the kubeconfig file is updated, run the following command:
kubectl config view --minify
5. To confirm that your IAM user or role is authenticated, run the following command:
kubectl get svc
You’re not the cluster creator
1. To get the configuration of your AWS CLI user or role, run the following command:
aws sts get-caller-identity
The output returns the ARN of the IAM user or role.
2. Ask the cluster owner or admin to add your IAM user or role to aws-auth ConfigMap.
Note: If you have the correct IAM permissions, then you can use AssumeRole to log in as the cluster creator.
3. To edit the aws-auth ConfigMap in a text editor, the cluster owner or admin must run the following command:
kubectl edit configmap aws-auth -n kube-system
Note: If you receive the error "Error from server (NotFound): configmaps "aws-auth" not found", then follow the instructions to apply the aws-auth ConfigMap to your cluster.
4. To add an IAM user or IAM role, complete one of the following steps.
Add the IAM user to mapUsers. For example:
Note: Replace testuser with your user name.
Add the IAM role to mapRoles. For example:
Note: Replace testrole with your role.
The value for username in the mapRoles section accepts lower-case characters only. The IAM role must be mapped without the path. To learn more about rolearn path requirements, expand the aws-auth ConfigMap does not grant access to the cluster section in Troubleshooting IAM.
To specify rolearn for an AWS IAM Identity Center (successor to AWS Single Sign-On) IAM role, remove the path ‘/aws-reserved/sso.amazonaws.com/REGION’ from the Role ARN. Otherwise, the entry in the ConfigMap can’t authorize you as a valid user.
The system:masters group allows superuser access to perform any action on any resource. For more information, see Default roles and role bindings. To restrict access for this user, you can create an Amazon EKS role and role binding resource. For an example of restricted access for users viewing resources in the Amazon EKS console, follow steps 2 and 3 in Required permissions.
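Putting the mapUsers and mapRoles pieces together, a minimal aws-auth ConfigMap might look like the sketch below. The account ID, names, and group are placeholders, and the field layout follows the Amazon EKS user guide:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # The role ARN must be specified without any path segments.
    - rolearn: arn:aws:iam::111122223333:role/testrole
      username: testrole
      groups:
        - system:masters
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/testuser
      username: testuser
      groups:
        - system:masters
```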
You’re the user or role that received the error
1. To update or generate the kubeconfig file after the aws-auth ConfigMap is updated, run either of the following commands.
As the IAM user, run the following command:
aws eks update-kubeconfig --region aws-region --name eks-cluster-name
As the IAM role, run the following command:
aws eks update-kubeconfig --region aws-region --name eks-cluster-name --role-arn arn:aws:iam::xxxxxxxxxxx:role/eks-role
Note: Replace eks-cluster-name with your cluster name, aws-region with your AWS Region, and the role ARN with your role.
2. To confirm that the kubeconfig file is updated, run the following command:
kubectl config view --minify
3. To confirm that your IAM user or role is authenticated, run the following command:
kubectl get svc
Note: If you continue to receive errors, then review the troubleshooting guidelines Using RBAC authorization.
error: kubectl You must be logged in to the server (Unauthorized) — pointed to root users certs #4353
So, I installed minikube v1.1.0 on Ubuntu 18.04 and am having some trouble querying resources using kubectl.
Below are some warnings that I get while starting minikube; minikube start returns:
and if I run kubectl get pods, below is the error that I get:
below are some of the lines of minikube logs:
Do you mind sharing the output of:
- minikube status
- kubectl version
- minikube kubectl get pods -- -n kube-system
- kubectl config view --minify | grep /.minikube | xargs stat
I’m a bit surprised to not see apiserver logs in your minikube logs output. If minikube logs does in fact output them, please include it.
My best current theories are a kubectl version mismatch, or a kubectl config that isn’t pointing at the right context somehow.
output of minikube status
minikube kubectl get pods -- -n kube-system
kubectl config view --minify | grep /.minikube | xargs stat
Output of the apiserver is being generated; I just shared the head of minikube logs.
Here are some of the lines from the apiserver logs:
The problem here is that the certificate files mentioned in your kube config do not exist! Is it possible that you are running minikube start and kubectl as different users?
I strongly recommend never running minikube or kubectl as root.
Is it possible that you are running minikube start and kubectl as different users? I strongly recommend never running minikube or kubectl as root.
The root cause here is that the user you are running kubectl as (root?) has an entry in ~/.kube/config that points the minikube cluster at certificate files that do not exist. To work around this, I suggest running:
minikube delete
kubectl config delete-context minikube
The next minikube start should then place a valid configuration in ~/.kube/config.
error: You must be logged in to the server (Unauthorized) #156
I followed the Keycloak documentation, but can't really seem to make it work.
Keycloak is set up as per the docs, and when I run the command below, it looks like I'm getting the response that I should.
kube-api is configured.
I applied below to kubernetes.
And added below to my kubeconfig file, which I have exported with export KUBECONFIG=./kubeconfig
It generates a temp file at ~/.kube/cache/oidc-login/d721553ba91f6078f86a5cb2caa2f78eb4d27898b238dfad310b87f01ecdd117 with what looks like correct content.
But when i try and execute kubectl commands I just get:
What am I missing here ?
It seems the kube-apiserver does not accept a token. Would you check the log of kube-apiserver?
Some message should appear like:
@Kerwood what is the kube-apiserver version? I have the same problem and I am using 1.16.1.
Just redeployed (kubeadm) with 1.15.4, same issue.
I got the same issue with 1.14.8 (kops) at first. But I found what was wrong with my settings.
- if you have --oidc-username-claim=email in kube-apiserver, you will need to add --oidc-extra-scope=email to the kubelogin args.
my final working configuration looks like this
@c4po I’m using kOps 1.14.6 and Google as the IdP, but these configs don’t work for me. What are your Google settings exactly? Did you only create an OAuth client ID, or anything else? Also, did you rolling-update the cluster after adding the kubeAPIServer config?
@hbceylan just a Google OAuth client ID, and a rolling-update of the cluster after the config change.
I am also facing the same issue and here are my commands.
------- user context created
------- Versions
minikube — 1.5.2
kubelogin — 1.15.0
kubectl — 1.16.0
kubernetes — 1.16.0
When I try to list the pods as this user, I get the below error:
E1210 05:33:11.849924 1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, oidc: parse username claims "email": claim not present]
I tried restarting minikube as well, but no luck.
However, when I remove 'email' from the commands, it works and logs in as user "https://accounts.google.com#sub".
It seems some bug has been hit! Or maybe it's my fault in configuring kubelogin.
******* Update *******
I tried running the below command and got a JWT token, which I decoded on jwt.io. And surprisingly, there are no email or profile details in the response.
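The jwt.io step can be reproduced locally: a JWT is three base64url segments, and the middle one holds the claims. The token below is fabricated for the sketch (a real ID token is signed by the IdP; no signature check is done here):

```python
import base64
import json

def b64url(data: dict) -> str:
    """Base64url-encode a JSON object, stripping the '=' padding as JWTs do."""
    return base64.urlsafe_b64encode(json.dumps(data).encode()).rstrip(b"=").decode()

# Fabricated token: header.payload.signature
token = ".".join([
    b64url({"alg": "RS256"}),
    b64url({"sub": "1234567890", "email": "eks-user@example.com"}),
    "fake-signature",
])

def jwt_claims(token: str) -> dict:
    """Decode the claims (payload) segment of a JWT without verifying it."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

print(jwt_claims(token)["email"])  # → eks-user@example.com
```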
@TheRum I’ve prepared a blog post for this that might be helpful.
Thanks @hbceylan for the article.
But, It doesn’t help solving the issue I’m facing.
You can dump the claims of the token by passing the -v1 option to kubelogin.
$ kubectl --user=oidc get nodes -v1
I0420 22:25:53.002248 67152 shortcut.go:89] Error loading discovery information: Unauthorized
error: You must be logged in to the server (Unauthorized)
kubeAPIServer:
oidcIssuerURL: https://accounts.google.com
oidcClientID: xxx.apps.googleusercontent.com
oidcUsernameClaim: e
This fixed our issue.
Hi Everyone!
I had the same original issue; I’m using authentication with the IdP Keycloak. Authentication via the browser works, but I receive the message below (log level 1) from the kubectl --user=oidc get nodes command.
I0923 17:16:30.416277 35800 get_token.go:107] you already have a valid token until 2021-09-23 17:21:28 +0200 CEST
I0923 17:16:30.416287 35800 get_token.go:114] writing the token to client-go
error: You must be logged in to the server (Unauthorized)
From the Kubernetes API pod, the error is the same as explained by @int128
1 authentication.go:53] Unable to authenticate the request due to an error: invalid bearer token
The result of the kubectl oidc-login setup command returns the token successfully.
Kubernetes version 1.19.6
Deployed by Kubespray
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://keycloak.localdomain.lan/auth/realms/Kubernetes
      - --oidc-client-id=kubernetes
      - --oidc-client-secret=SECRETID
      - --insecure-skip-tls-verify
      - -v1
      command: kubectl
      env: null
      provideClusterInfo: false
Has someone already found and solved this issue?
Thanks in advance for any help!
RBAC AAD access error. You must be logged in to the server (Unauthorized). AKS 1.10.3 #478
After significant hours invested in trying, I’m unable to access cluster resources under my AAD account (thus as a non-admin user) when RBAC is enabled.
I’ve followed and re-followed the steps to create a cluster with RBAC / AAD as found here: https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/aad.md
The only different path I took is that I wanted to create my cluster using the Resource Templates rather than via the Azure CLI. Thus I used the "2018-03-31" template, set enableRBAC=true, and provided an AADProfile section nested within properties. My cluster was created successfully with the template, using Kubernetes version 1.10.3.
Now as it stands, when I connect to my cluster (as non-admin), I am asked to authenticate at https://microsoft.com/devicelogin, and upon doing so the website confirms I have authenticated with my AAD client (set up as a native App Registration in my Azure AD).
However, once my CLI updates itself, I’m presented with the message: "You must be logged in to the server (Unauthorized)".
It might be of interest to note that while logged in as admin, if I try kubectl get pods --as=MyUserName then the command works. And if I run kubectl auth can-i get pods --as=myUserName then it responds with a 'yes'. These responses very much contradict what I witness when I actually try to interact with my cluster under my own credentials.
FYI, I’ve tried creating cluster role bindings both for AAD groups and for a single user, with the same outcome. An example binding I’ve applied is:
I’ve tried both username and e-mail in the name field, along with AD group names with 'kind' set to 'Group'.
output of minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:14:56Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
minikube kubectl get pods -- -n kube-system
💾 Downloading kubectl v1.14.2
error: You must be logged in to the server (Unauthorized)
kubectl config view --minify | grep /.minikube | xargs stat
stat: cannot stat 'certificate-authority:': No such file or directory
File: /root/.minikube/ca.crt
Size: 1066 Blocks: 8 IO Block: 4096 regular file
Device: 801h/2049d Inode: 6684807 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-05-25 01:15:37.604230741 +0530
Modify: 2019-05-23 23:04:49.309566757 +0530
Change: 2019-05-23 23:04:49.309566757 +0530
Birth: -
stat: cannot stat 'client-certificate:': No such file or directory
File: /root/.minikube/client.crt
Size: 1103 Blocks: 8 IO Block: 4096 regular file
Device: 801h/2049d Inode: 6684811 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-05-25 01:17:19.461884004 +0530
Modify: 2019-05-25 01:15:38.732249150 +0530
Change: 2019-05-25 01:15:38.732249150 +0530
Birth: -
stat: cannot stat 'client-key:': No such file or directory
File: /root/.minikube/client.key
Size: 1675 Blocks: 8 IO Block: 4096 regular file
Device: 801h/2049d Inode: 6684812 Links: 1
Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-05-25 01:17:19.461884004 +0530
Modify: 2019-05-25 01:15:38.732249150 +0530
Change: 2019-05-25 01:15:38.732249150 +0530
Birth: -
Output of the apiserver is being generated; I just shared the head of minikube logs.
Here are some of the lines from the apiserver logs:
==> kube-apiserver <==
Trace[1885061068]: [733.4152ms] [732.054159ms] Object stored in database
I0524 19:48:02.324521 1 trace.go:81] Trace[672210247]: "Create /api/v1/namespaces/kube-system/pods" (started: 2019-05-24 19:48:01.755375318 +0000 UTC m=+80.041739733) (total time: 569.110394ms):
Trace[672210247]: [567.867905ms] [566.868372ms] Object stored in database
I0524 19:48:02.326896 1 trace.go:81] Trace[896143369]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-05-24 19:48:01.701720389 +0000 UTC m=+79.988084858) (total time: 625.14946ms):
Trace[896143369]: [625.053027ms] [624.914819ms] About to write a response
I0524 19:48:02.331673 1 trace.go:81] Trace[762931299]: "List /api/v1/namespaces/kube-system/secrets" (started: 2019-05-24 19:48:01.588537165 +0000 UTC m=+79.874901460) (total time: 743.083552ms):
Trace[762931299]: [742.749379ms] [742.669661ms] Listing from storage done
I0524 19:48:04.128846 1 trace.go:81] Trace[1718891657]: "Get /api/v1/namespaces/default" (started: 2019-05-24 19:48:03.287228669 +0000 UTC m=+81.573592997) (total time: 841.582842ms):
Trace[1718891657]: [841.400491ms] [841.379489ms] About to write a response
I0524 19:48:04.779476 1 trace.go:81] Trace[996696631]: "List /apis/batch/v1/jobs" (started: 2019-05-24 19:48:04.232561989 +0000 UTC m=+82.518926326) (total time: 546.89131ms):
How do I resolve the error "You must be logged in to the server (Unauthorized)" when I connect to the Amazon EKS API server?
Last updated: 2023-01-26
I’m using kubectl commands to connect to the Amazon Elastic Kubernetes Service (Amazon EKS) application programming interface (API) server. I received the message "error: You must be logged in to the server (Unauthorized)".
Short description
You get this error when the AWS Identity and Access Management (IAM) entity that’s configured in kubectl isn’t authenticated by Amazon EKS.
You are authenticated and authorized to access your Amazon EKS cluster based on the IAM entity (user or role) that you use. Therefore, be sure of the following:
- You configured the kubectl tool to use your IAM user or role.
- Your IAM entity is mapped to the aws-auth ConfigMap.
To resolve this issue, you must complete the steps in one of the following sections based on your use case:
- You’re the cluster creator
- You’re not the cluster creator
Resolution
You’re the cluster creator
You’re the cluster creator if your IAM entity was used to create the Amazon EKS cluster.
1. Run the following query in Amazon CloudWatch Logs Insights to identify the cluster creator ARN:
First, select the log group for your Amazon EKS cluster (example: /aws/eks/my-cluster/cluster). Then, run the following query:
fields @logStream, @timestamp, @message
| sort @timestamp desc
| filter @logStream like /authenticator/
| filter @message like "username=kubernetes-admin"
| limit 50
Note: Be sure that you turned on Amazon EKS authenticator logs.
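If authenticator logs aren't enabled yet, they can be turned on from the AWS CLI. A minimal sketch, assuming a placeholder cluster name and region (substitute your own):

```shell
# Sketch: enable the "authenticator" control-plane log type so the query above
# has data to search. CLUSTER and REGION below are placeholders.
CLUSTER=my-cluster
REGION=us-east-1
LOGGING='{"clusterLogging":[{"types":["authenticator"],"enabled":true}]}'

if command -v aws >/dev/null 2>&1; then
  aws eks update-cluster-config --name "$CLUSTER" --region "$REGION" \
    --logging "$LOGGING" \
    || echo "call failed (expected without valid credentials/cluster)" >&2
else
  echo "aws CLI not installed; command shown for reference" >&2
fi
```

Control-plane logs incur CloudWatch charges, so you may want to disable the log type again once the investigation is done.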
This query returns the IAM entity that’s mapped as the cluster creator:
@message
time="2022-05-26T18:55:30Z" level=info msg="access granted" arn="arn:aws:iam::123456789000:user/testuser" client="127.0.0.1:57586" groups="[system:masters]" method=POST path=/authenticate uid="aws-iam-authenticator:123456789000:AROAFFXXXXXXXXXX" username=kubernetes-admin
2. Be sure that you configured the AWS CLI with the cluster creator IAM entity. To see if the IAM entity is configured for AWS CLI in your shell environment, run the following command:
$ aws sts get-caller-identity
You can also run this command using a specific profile:
$ aws sts get-caller-identity --profile MY-PROFILE
The output returns the Amazon Resource Name (ARN) of the IAM entity that’s configured for AWS CLI.
Example:
{
"UserId": "XXXXXXXXXXXXXXXXXXXXX",
"Account": "XXXXXXXXXXXX",
"Arn": "arn:aws:iam::XXXXXXXXXXXX:user/testuser"
}
Confirm that the IAM entity that’s returned matches the cluster creator IAM entity. If the returned IAM entity isn’t the cluster creator, then update the AWS CLI configuration to use the cluster creator IAM entity.
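A quick way to compare the two identities is to split the ARN on ":" — field 5 is the account ID and field 6 is the entity. A small sketch (the ARN value is illustrative):

```shell
# Sketch: split a caller-identity ARN into account ID and entity so it can be
# compared against the cluster creator ARN. The ARN below is illustrative.
arn="arn:aws:iam::111122223333:user/testuser"

account=$(echo "$arn" | cut -d: -f5)   # -> 111122223333
entity=$(echo "$arn" | cut -d: -f6)    # -> user/testuser

echo "account=$account entity=$entity"
```

If the account or entity differs from the cluster creator ARN found in the authenticator logs, switch AWS CLI profiles before continuing.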
3. Update or generate the kubeconfig file using the following command:
$ aws eks update-kubeconfig --name eks-cluster-name --region aws-region
Note:
- Replace eks-cluster-name with the name of your cluster.
- Replace aws-region with the name of your AWS Region.
To specify an AWS CLI profile, run the following command:
$ aws eks update-kubeconfig --name eks-cluster-name --region aws-region --profile my-profile
Note:
- Replace eks-cluster-name with the name of your cluster.
- Replace aws-region with the name of your Region.
- Replace my-profile with the name of your profile.
4. To confirm that the kubeconfig file is updated, run the following command:
$ kubectl config view --minify
5. To confirm that your IAM entity is authenticated and that you can access your EKS cluster, run the following command:
$ kubectl get svc
Example output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 77d
You’re not the cluster creator
You’re not the cluster creator if your IAM entity wasn’t used to create the cluster. In this case, you must map your IAM entity to the aws-auth ConfigMap to allow access to the cluster.
1. Be sure that you configured the AWS CLI with your IAM entity. To see the IAM entity that’s configured for AWS CLI in your shell environment, run the following command:
$ aws sts get-caller-identity
You can also run this command using a specific profile:
$ aws sts get-caller-identity --profile my-profile
The output returns the ARN of the IAM entity that’s configured for AWS CLI.
Example:
{
"UserId": "XXXXXXXXXXXXXXXXXXXXX",
"Account": "XXXXXXXXXXXX",
"Arn": "arn:aws:iam::XXXXXXXXXXXX:user/testuser"
}
Confirm that the IAM entity that’s returned is your IAM entity. If the returned IAM entity isn’t the one used to interact with your cluster, first update the AWS CLI configuration to use the correct IAM entity. Then, retry accessing your cluster using kubectl. If the issue persists, continue to step 2.
2. If the returned IAM entity isn’t the cluster creator, add your IAM entity to the aws-auth ConfigMap. This allows the IAM entity to access the cluster.
Only the cluster admin can modify aws-auth ConfigMap. Therefore, do either of the following:
- Use the instructions in the You’re the cluster creator section to access the cluster using the cluster creator IAM entity.
- Ask the cluster admin to perform this action.
Check if your IAM entity is in the aws-auth ConfigMap by running the following command:
eksctl get iamidentitymapping --cluster cluster-name
-or-
kubectl describe configmap aws-auth -n kube-system
If your IAM entity is in the aws-auth ConfigMap, then you can skip to step 3.
Map your IAM entity automatically by running the following command:
eksctl create iamidentitymapping \
  --cluster $CLUSTER_NAME \
  --region $REGION \
  --arn arn:aws:iam::XXXXXXXXXXXX:user/testuser \
  --group system:masters \
  --no-duplicate-arns \
  --username admin-user1
Or, you can map your IAM entity manually by editing the aws-auth ConfigMap:
$ kubectl edit configmap aws-auth --namespace kube-system
To add an IAM user, add the IAM user ARN to mapUsers.
Example:
mapUsers: |
  - userarn: arn:aws:iam::XXXXXXXXXXXX:user/testuser
    username: testuser
    groups:
      - system:masters
To add an IAM role, add the IAM role ARN to mapRoles.
Example:
mapRoles: |
  - rolearn: arn:aws:iam::XXXXXXXXXXXX:role/testrole
    username: testrole
    groups:
      - system:masters
Important:
- The IAM role must be mapped without the path. To learn more about rolearn path requirements, expand the aws-auth ConfigMap does not grant access to the cluster section in Troubleshooting IAM.
- To specify rolearn for an AWS IAM Identity Center (successor to AWS Single Sign-On) IAM role, remove the path "/aws-reserved/sso.amazonaws.com/REGION" from the Role ARN. Otherwise, the entry in the ConfigMap can’t authorize you as a valid user.
- The system:masters group allows superuser access to perform any action on any resource. For more information, see Default roles and role bindings. To restrict access for this user, you can create an Amazon EKS role and role binding resource. For an example of restricted access for users viewing resources in the Amazon EKS console, follow steps 2 and 3 in Required permissions.
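As a sketch of what restricted access could look like instead of system:masters, the following manifest defines a read-only ClusterRole and binds it to a group. The names ("eks-read-only", "read-only-group") are illustrative; you would map the IAM entity to "read-only-group" in the aws-auth ConfigMap instead of system:masters:

```yaml
# Sketch of a least-privilege alternative to system:masters.
# Names are illustrative, not from the original article.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-read-only
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "nodes", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-read-only-binding
subjects:
  - kind: Group
    name: read-only-group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-read-only
  apiGroup: rbac.authorization.k8s.io
```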
3. Update or generate the kubeconfig file by running the following command. Be sure that the AWS CLI is configured with your IAM entity that’s returned in step 1.
$ aws eks update-kubeconfig --name eks-cluster-name --region aws-region
Note:
- Replace eks-cluster-name with the name of your cluster.
- Replace aws-region with the name of your AWS Region.
You can also run this command using a specific profile:
$ aws eks update-kubeconfig --name eks-cluster-name --region aws-region --profile my-profile
Note:
- Replace eks-cluster-name with the name of your cluster.
- Replace aws-region with the name of your AWS Region.
- Replace my-profile with the name of your profile.
4. To confirm that the kubeconfig file is updated, run the following command:
$ kubectl config view --minify
5. To confirm that your IAM user or role is authenticated, try to access the cluster again. For example, you can run the following command to confirm that the error is resolved:
$ kubectl get svc
Example output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 77d
Additional troubleshooting tips
If the error still persists, use the following troubleshooting tips to identify the issue.
When you run a kubectl command, a request is sent to the Amazon EKS cluster API server. Then, the Amazon EKS authenticator tries to authenticate this request. Therefore, check EKS authenticator logs in CloudWatch to identify the issue.
1. Be sure that you turned on logging for your Amazon EKS cluster.
2. Open CloudWatch Log Insights.
3. Select the log group for your cluster. Example: «/aws/eks/example-cluster/cluster».
4. Run the following query:
fields @timestamp, @message
| filter @logStream like /authenticator/
| sort @timestamp desc
| limit 1000
Identify log lines for the same time interval when you got the error by running kubectl commands. You can find more information about the cause of the error in Amazon EKS authenticator logs.
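The same query can be run from the CLI if console access is inconvenient. A sketch, assuming the log group and a one-hour window are placeholders (GNU date syntax):

```shell
# Sketch: run the Logs Insights query from the AWS CLI instead of the console.
# Log group, region, and time window are placeholders; `date -d` is GNU-only.
LOG_GROUP="/aws/eks/example-cluster/cluster"
QUERY='fields @timestamp, @message | filter @logStream like /authenticator/ | sort @timestamp desc | limit 1000'

if command -v aws >/dev/null 2>&1; then
  QUERY_ID=$(aws logs start-query \
    --log-group-name "$LOG_GROUP" \
    --start-time "$(date -d '1 hour ago' +%s)" \
    --end-time "$(date +%s)" \
    --query-string "$QUERY" \
    --query queryId --output text) \
    || echo "start-query failed (needs credentials and the log group)" >&2
  # Results are asynchronous; poll until the status is Complete.
  aws logs get-query-results --query-id "$QUERY_ID" || true
else
  echo "aws CLI not installed; commands shown for reference" >&2
fi
```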
- If the issue is caused by using the incorrect IAM entity for kubectl, then review the kubectl kubeconfig and AWS CLI configuration. Make sure that you’re using the correct IAM entity. For example, suppose that the logs look similar to the following. This output means that the IAM entity used by kubectl can’t be validated. Be sure that the IAM entity used by kubectl exists in IAM and the entity’s programmatic access is turned on.
time="2022-12-26T20:46:48Z" level=warning msg="access denied" client="127.0.0.1:43440" error="sts getCallerIdentity failed: error from AWS (expected 200, got 403). Body: {"Error":{"Code":"InvalidClientTokenId","Message":"The security token included in the request is invalid.","Type":"Sender"},"RequestId":"a9068247-f1ab-47ef-b1b1-cda46a27be0e"}" method=POST path=/authenticate
- If the issue is because your IAM entity isn’t mapped in the aws-auth ConfigMap, or is mapped incorrectly, then review the aws-auth ConfigMap. Make sure that the IAM entity is mapped correctly and meets the requirements that are listed in the You’re not the cluster creator section. In this case, the EKS authenticator logs look similar to the following:
time="2022-12-28T15:37:19Z" level=warning msg="access denied" arn="arn:aws:iam::XXXXXXXXXX:role/admin-test-role" client="127.0.0.1:33384" error="ARN is not mapped" method=POST path=/authenticate
- If the aws-auth ConfigMap was updated and you lost access to the cluster, you can access the cluster using the IAM entity of the cluster creator. This is because the cluster creator doesn’t need to be mapped in the aws-auth ConfigMap.
- If the cluster creator IAM entity was deleted, first create the same IAM user or role again. Then, access the cluster using this IAM entity by following the steps in You’re the cluster creator section.
- If the cluster creator is an IAM role that was created for an SSO user that was removed, then you can’t create this IAM role again. In this case, reach out to AWS Support for assistance.
In this post, we will see How To Fix – Error: You must be logged in to the server (Unauthorized) in Kubernetes.
Error: You must be logged in to the server (Unauthorized) in Kubernetes
First things first, do a basic check to verify the IAM user:
aws sts get-caller-identity
Configure AWS CLI directly with access key and secret key.
Solution 1:
Create the cluster under the same IAM profile that you use with the AWS CLI.
In ~/.aws/credentials, the profile used by kubectl must match the IAM entity that created the cluster.
Solution 2:
Edit the aws-auth ConfigMap to add the IAM user/role to the EKS cluster. Use the command below:
kubectl edit -n kube-system configmap/aws-auth
An editor then opens where you can map new users.
Next, create a role binding in the Kubernetes cluster for the same user as in the ConfigMap:
kubectl create clusterrolebinding ops-user-cluster-admin-binding --clusterrole=cluster-admin --user=user1
This grants the cluster-admin ClusterRole to a user named user1 across the entire cluster.
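You can sanity-check the binding with `kubectl auth can-i`, impersonating the mapped user. A sketch ("user1" matches the binding above):

```shell
# Sketch: verify what the bound user can actually do by impersonating them.
# "user1" matches the --user from the clusterrolebinding above.
USER_TO_CHECK=user1

if command -v kubectl >/dev/null 2>&1; then
  # Expect "yes" once cluster-admin is bound:
  kubectl auth can-i '*' '*' --as "$USER_TO_CHECK" || true
  kubectl auth can-i get pods --as "$USER_TO_CHECK" -n kube-system || true
else
  echo "kubectl not installed; commands shown for reference" >&2
fi
```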
Solution 3:
- Check that the IAM user credentials (of the user who created the cluster) are set properly in the AWS CLI:
aws sts get-caller-identity
- Update the kubeconfig file
aws eks --region region-code update-kubeconfig --name cluster1
- Verify the config file
- Run kubectl:
kubectl get svc
Solution 4:
- Set up role in kubeconfig file.
- Add the role permission
{
  "Effect": "Allow",
  "Action": "sts:AssumeRole",
  "Resource": "arn:aws:iam::xxxxxxxxxxx:role/eks-role"
}
- Modify the trust relationship to allow user1 to assume the role:
{
  "Sid": "",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::xxxxxxxxxxx:user/user1"
  },
  "Action": "sts:AssumeRole"
}
- Confirm IAM user credentials are set properly
$ aws sts get-caller-identity
- Update the kubeconfig file
aws eks --region region-code update-kubeconfig --name cluster_name --role-arn arn:aws:iam::xxxxxxxxxxx:role/eks-role
- Verify the config file
- Run kubectl:
kubectl get svc
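After update-kubeconfig with --role-arn, kubectl calls the equivalent of `aws eks get-token` with that role, so you can test the assume-role path directly before debugging kubectl. A sketch (the account ID is a placeholder, as in the original):

```shell
# Sketch: test the assume-role path that kubectl will use under the hood.
# The account ID in the ARN is a placeholder.
CLUSTER=cluster_name
ROLE_ARN="arn:aws:iam::xxxxxxxxxxx:role/eks-role"

if command -v aws >/dev/null 2>&1; then
  aws eks get-token --cluster-name "$CLUSTER" --role-arn "$ROLE_ARN" \
    || echo "get-token failed; check the role trust policy and your credentials" >&2
else
  echo "aws CLI not installed; command shown for reference" >&2
fi
```

If get-token fails here, the problem is the IAM trust relationship or your credentials, not the Kubernetes side.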
Solution 5:
- Check if you are using expired keys.
export AWS_ACCESS_KEY_ID="***************"
export AWS_SECRET_ACCESS_KEY="*************"
export AWS_SESSION_TOKEN="************************"
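Expired or stale session tokens make every kubectl call fail with "Unauthorized", so it's worth confirming that the credentials themselves still work before digging into Kubernetes. A sketch:

```shell
# Sketch: confirm the AWS credentials are still valid before blaming kubectl.
if command -v aws >/dev/null 2>&1; then
  if aws sts get-caller-identity >/dev/null 2>&1; then
    echo "credentials are valid"
  else
    echo "credentials rejected; re-export fresh keys or refresh the session token" >&2
  fi
else
  echo "aws CLI not installed; check shown for reference" >&2
fi
```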
Hope this helps.
This error occurs when you try to access the cluster as a user other than the one who created the EKS cluster. You get an authorization error when your AWS Identity and Access Management (IAM) entity isn’t authorized by the role-based access control (RBAC) configuration of the Amazon EKS cluster. This happens when the Amazon EKS cluster is created by an IAM user or role that’s different from the one used by aws-iam-authenticator. Initially, only the creator of the Amazon EKS cluster has system:masters permissions to configure the cluster. To extend system:masters permissions to other users and roles, you must add the aws-auth ConfigMap to the configuration of the Amazon EKS cluster. The ConfigMap allows other IAM entities, such as users and roles, to access the Amazon EKS cluster.
Solution:
In the following steps, the cluster creator is cluster_creator, and the user that doesn’t currently have access to the cluster but needs access is designated_user.
Add designated_user to the ConfigMap if cluster_creator is an IAM user
1. Use SSH to connect to the kubectl instance.
2. In the AWS Command Line Interface (CLI), run the following command:
[root@ip- ~]# aws sts get-caller-identity
{
"UserId": "AIDA6JXXXXXXXXX",
"Account": "9828XXXXXX",
"Arn": "arn:aws:iam::9828XXXXXX:user/xxxx@cnxxxxxx"
}
[root@ip- ~]#
3. To list the pods running in the cluster of the default namespace, run the following kubectl command:
kubectl get pods
The output shows the following: “error: You must be logged in to the server (Unauthorized).” This error means that designated_user doesn’t have authorization to access the Amazon EKS cluster.
4. To configure the AWS access key ID and the AWS secret access key of cluster_creator, run the following command:
aws configure
5. To verify that cluster_creator has access to the cluster, run the following command:
kubectl get pods
You shouldn’t get an unauthorized error message. The output should list all the pods running in the default namespace. If the output shows that no resources are found, then no pods are running.
6. To give designated_user access to the cluster, add the mapUsers section to your aws-auth.yaml file. See the following example aws-auth.yaml file from Managing Users or IAM Roles for your Cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::11122223333:role/EKS-Worker-NodeInstanceRole-1I00GBC9U4U7B
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
7. Add designated_user to the mapUsers section of the aws-auth.yaml file in step 6, and then save the file. See the following example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::11122223333:role/EKS-Worker-NodeInstanceRole-1I00GBC9U4U7B
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::11122223333:user/designated_user
      username: designated_user
      groups:
        - system:masters
Note: The username in the preceding example is the name that Kubernetes uses to map to the IAM entity passed in userarn.
8. To apply the new ConfigMap to the RBAC configuration of the cluster, run the following command:
kubectl apply -f aws-auth.yaml
9. To change the AWS CLI configuration again to use the credentials of designated_user, run the following command:
aws configure
10. To verify that designated_user has access to the cluster, run the following command:
kubectl get pods
You shouldn’t get an unauthorized error message. The output should list all the pods running in the default namespace. If the output shows that no resources are found, then no pods are running.
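Rather than re-running `aws configure` to swap between cluster_creator and designated_user (steps 4 and 9), you can keep two named profiles in ~/.aws/credentials and switch with the AWS_PROFILE environment variable. A sketch with illustrative profile names:

```shell
# Sketch: switch identities with AWS_PROFILE instead of rewriting the default
# credentials. Profile names "cluster-creator" and "designated-user" are
# illustrative and must exist in ~/.aws/credentials.
export AWS_PROFILE=cluster-creator

if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods || true                                # runs as cluster_creator
  AWS_PROFILE=designated-user kubectl get pods || true    # runs as designated_user once mapped
else
  echo "kubectl not installed; commands shown for reference" >&2
fi
```

This avoids clobbering the default credentials while you verify that both identities behave as expected.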