I have Kubernetes with ClusterRoles defined for my users, and permissions granted per namespace via RoleBindings.
I want these users to be able to access the Kubernetes Dashboard with their custom permissions. However, when they try to log in using the kubeconfig option, they get this message:
"Internal error (500): Not enough data to create auth info structure."
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md — this guide only covers creating ADMIN users, not users with custom permissions or without privileges… (edited)
asked Dec 9, 2021 at 9:41 by sincorchetes
Update (SOLVED):
You have to do the following:
- Create a ServiceAccount per user:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: NAME-user
  namespace: kubernetes-dashboard
- Adapt the RoleBinding, adding this ServiceAccount as a subject:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: PUT YOUR CR HERE
  namespace: PUT YOUR NS HERE
subjects:
- kind: User
  name: PUT YOUR CR HERE
  apiGroup: 'rbac.authorization.k8s.io'
- kind: ServiceAccount
  name: NAME-user
  namespace: kubernetes-dashboard
roleRef:
  kind: ClusterRole
  name: PUT YOUR CR HERE
  apiGroup: 'rbac.authorization.k8s.io'
- Get the token (on Kubernetes 1.24+, see the note after these steps):
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/NAME-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
- Add the token to your kubeconfig file. Your kubeconfig should contain something like this:
apiVersion: v1
clusters:
- cluster:
    server: https://XXXX
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: YOUR USER
  name: kubernetes
current-context: "kubernetes"
kind: Config
preferences: {}
users:
- name: YOUR USER
  user:
    client-certificate-data: CODED
    client-key-data: CODED
    token: CODED   # ---> ADD TOKEN HERE
- Log in.
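Note: on Kubernetes 1.24 and newer, a long-lived token Secret is no longer created automatically for a ServiceAccount, so the get secret command above may return nothing. In that case you can request a (short-lived) token for the same account directly, e.g. for the NAME-user placeholder above:
kubectl -n kubernetes-dashboard create token NAME-user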
answered Dec 9, 2021 at 23:49 by sincorchetes
A few thoughts for those who might end up here from a search. The reason my ~/.kube/config YAML file did not work in dashboard 1.8 was that it did not contain a token or a username with a password. Searching for "Not enough data to create auth info structure" in the dashboard's source code clearly shows that this is what is expected in the file you upload. The same was true in @txg1550759's case.
The YAML file I was trying to authenticate with came from /etc/kubernetes/admin.conf, which was generated by kubeadm 1.7 back in July. I have seen other admin files since then that were generated by kops; these did contain a password, if I remember correctly. So perhaps the lack of a token or a password in kubeconfig is some kind of legacy thing or a kubeadm-specific thing, not sure.
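For reference, the relevant part of a kubeconfig that the dashboard will accept looks roughly like this (the user name and token value are placeholders, not taken from my cluster):
users:
- name: dashboard-user
  user:
    token: <paste-a-service-account-token-here>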
I ran kubectl get clusterroles and kubectl get clusterrolebindings and saw an item called cluster-admin in both. However, unlike other role bindings (e.g. tiller-cluster-rule), the cluster-admin one referred to a Group subject with an apiGroup instead of a ServiceAccount (to which a token can belong). Check out the difference at the bottom of each output:
kubectl edit ClusterRoleBinding tiller-cluster-rule
↓
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2017-07-23T16:34:40Z
  name: tiller-cluster-rule
  resourceVersion: "2328"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller-cluster-rule
  uid: d3249b5d-6fc4-1227-8920-5250000643887
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:   # ← here
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
kubectl edit ClusterRoleBinding cluster-admin
↓
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2017-07-23T16:08:15Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "118"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: 2224ba97-6fc4-1227-8920-5250000643887
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:   # ← here
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
This suggests that my cluster probably does not have a dedicated 'root' service account per se. That's why ~/.kube/config works for kubectl without having a token or a username and password in it, but does not work for the dashboard.
Nevertheless, I could get into the dashboard by authenticating myself as other ServiceAccounts, and this worked well. Depending on the privileges of the service account I picked, the dashboard gave me access to different resources, which is great! Here's an example of getting a token for the service account called tiller to authenticate (you'll have it if you use helm):
kubectl describe serviceaccount tiller -n kube-system
↓
Name:                tiller
Namespace:           kube-system
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   tiller-token-854dx
Tokens:              tiller-token-854dx
kubectl describe secret tiller-token-854dx -n kube-system
↓
...
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      ×××
Copy the token ××× and paste it into the dashboard's login screen.
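If you prefer a one-liner over copying the token out of kubectl describe, the same value can be extracted directly; this assumes the secret name shown above (tiller-token-854dx), which will be different on your cluster:
kubectl -n kube-system get secret tiller-token-854dx -o jsonpath='{.data.token}' | base64 --decode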
The useful thing about the tiller service account in my case is that it's bound to the cluster-admin cluster role (see the YAML above). This is because tiller needs to be able to launch pods, set up ingress rules, edit secrets, etc. Such a role binding is not present on every cluster, but it may be the default in simple setups. If that's the case, using tiller's token in the dashboard makes you the 'root' user, because it implies you have the cluster-admin cluster role.
Finally, my upgrade from dashboard 1.6 to 1.8 can be considered finished! 😄
All this RBAC stuff is way too advanced for me, to be honest, so it may be that I've done something wrong. I guess a proper solution would be to create a new service account and a new role binding from scratch and then use that token in the dashboard instead of tiller's. However, I'll probably stay with tiller's token for a while until I find the energy to switch to a proper solution. Could anyone please confirm or correct my thoughts?
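For the record, a sketch of what that 'proper' setup might look like: a dedicated ServiceAccount bound to the built-in view ClusterRole instead of cluster-admin. The names below are purely illustrative, and you would pick whichever ClusterRole matches the access you actually want to grant:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-viewer
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: dashboard-viewer
  namespace: kube-system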
After looking at this answer, How to sign in kubernetes dashboard?, and the source code, I figured out the kubeconfig authentication.
After a kubeadm install, get the default service account token on the master server and add it to the config file. Then use that config file to authenticate.
You can use this snippet to add the token:
#!/bin/bash
# Grab the token of the default service account in the kube-system namespace
TOKEN=$(kubectl -n kube-system describe secret default | awk '$1=="token:"{print $2}')
# Attach that token to the kubernetes-admin user in your kubeconfig
kubectl config set-credentials kubernetes-admin --token="${TOKEN}"
Your config file should then look like this:
kubectl config view | cut -c1-50 | tail -10
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.ey
Only the authentication options specified by the --authentication-mode flag are supported in the kubeconfig file.
You can authenticate with a token (any token in the kube-system namespace):
$ kubectl get secrets -n kube-system
$ kubectl get secret $SECRET_NAME -n=kube-system -o json | jq -r '.data["token"]' | base64 -d > user_token.txt
and authenticate with that token (see the user_token.txt file).
Two things are going on here:
- the Kubernetes Dashboard application needs an authentication token,
- and this authentication token must be linked to an account with sufficient privileges.
The usual way to deploy the Dashboard application is just:
- to kubectl apply a YAML file pulled from the configuration recommended at the GitHub project (for the dashboard): /src/deploy/recommended/kubernetes-dashboard.yaml ⟹ master•v1.10.1, as shown below,
- then to run kubectl proxy and access the dashboard through the locally mapped port 8001.
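Concretely, for the v1.10.x layout referenced above, that amounts to something like the following two commands (the raw URL below simply mirrors the repository path mentioned; newer dashboard releases ship their recommended manifest at a different path):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy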
However, this default configuration is generic and minimal. It only maps a role binding with minimal privileges. And, especially on DigitalOcean, the kubeconfig file provided when provisioning the cluster lacks the actual token, which is necessary to log into the dashboard.
Thus, to fix these shortcomings, we need to ensure there is an account that has a binding to the cluster-admin ClusterRole in the kube-system namespace. The default setup mentioned above only provides a binding to kubernetes-dashboard-minimal.
We can fix that by explicitly deploying:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
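Assuming the manifest above is saved locally as, say, dashboard-admin.yaml (the filename is arbitrary), it is applied in the usual way:
kubectl apply -f dashboard-admin.yaml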
And then we also need to get the token for this ServiceAccount…
- kubectl get serviceaccount -n kube-system will list all service accounts. Check that the one you want/created is present.
- kubectl get secrets -n kube-system should list a secret for this account.
- With kubectl describe secret -n kube-system admin-user-token-XXXXXX you get the information about the token.
The other answers to this question provide ample hints on how this access can be scripted in a convenient way (e.g. using awk, using grep, using kubectl get with -o=json and piping to jq, or using -o=jsonpath).
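Putting those hints together, one possible one-liner is the sketch below; it assumes the admin-user ServiceAccount from the manifest above and a pre-1.24 cluster where the token Secret is still created automatically:
kubectl -n kube-system get secret $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode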
You can then either:
- store this token in a text file and upload that file, or
- edit your kubeconfig file and paste the token into the admin user entry provided there.