Error: unknown command "oidc-login" for "kubectl"

Describe the bug
My kubeconfig file works when using kubectl and the krew-installed oidc-login plugin, but it doesn't work for Lens: when I paste my config into Lens and press the "Add Cluster" button I get the error Error: unknown command "oidc-login" for "kubectl".

To Reproduce
Steps to reproduce the behavior:
Open Lens and load my kubeconfig and the error appears

Expected behavior
I think my browser should open and I should be able to authenticate through it. I enter my login details and, if they are correct, I get an OK message in the browser; I then expect Lens to be able to connect to the cluster.

Environment (please complete the following information):

  • Lens Version: 3.5.0
  • OS: Ubuntu 20.04 LTS
  • Installation method (e.g. snap or AppImage in Linux): snap

Logs:
When you run the application executable from the command line you will see some logging output. Please paste it here:

Connecting ...
Error: unknown command "oidc-login" for "kubectl"
Run 'kubectl --help' for usage.
2020/06/26 12:39:40 http: proxy error: getting credentials: exec: exit status 1

Kubeconfig:
Quite often the problems are caused by a malformed kubeconfig which the application tries to load. Please share your kubeconfig; remember to remove any secret and sensitive information.

apiVersion: v1
kind: Config

clusters:
- name: mycluster-cluster
  cluster:
    server: https://mycluster.com:6443
    certificate-authority-data: LS089eutj980w5j7t4........
contexts:
- context:
    cluster: mycluster-cluster
    user: oidc
  name: mycluster-context
current-context: mycluster-context

preferences: {}

users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://auth.mydomain.com/auth/realms/MYREALM
      - --oidc-client-id=kubernetes
      - --oidc-client-secret=                                    
      command: kubectl
      env: null

Additional context
Add any other context about the problem here.
I am not sure if it matters, but my desktop OS is Ubuntu MATE and I am using Lens 3.5.0, which was installed as a snap.

LinTechSo

Error: unknown command "oidc-login" for "kubectl"

problem

I installed Lens with the sudo snap install kontena-lens --classic command from the Snap Store.
My Kubernetes clusters have OIDC

Any updates ?

nabokihms

Same problem here. Any help? I can provide any debug info.

tapanhalani

Same issue on my end too; my cluster is a local microk8s configured with OIDC. I can also provide debug info if needed.

Nokel81

FYI the Snap Store version is currently extremely out of date. The latest version can be downloaded from k8slens.dev and has some fixes regarding this. Can you try the latest version and see if that resolves the issue?

AlexanderSarson

same problem here.
oidc-login is installed with krew and kubectl commands work, but Lens doesn't.

I’m on Windows though

Nokel81

@AlexanderSarson What version are you using? Have you quit the tray icon recently?

AlexanderSarson

@Nokel81 You were right, it was the tray icon. It works now, thanks :)

Nokel81

There is currently no known way to get the current "root environment" on Windows, so we never refresh it.

nabokihms

#6317 (comment)
I am on Mac and installed Lens from the official website. If there is any information I can provide to help with debugging, please tell me.

Nokel81

  1. What shell are you using?
  2. Do you see a notification about failing to sync shell env vars?
  3. Logs are at ~/Library/Logs/Lens

nabokihms

@Nokel81

  1. zsh
  2. Nothing like this
  3. The message was the same: unknown command "oidc-login" for "kubectl".

BTW, I upgraded Lens and now everything works as expected. I do not know what actually fixed the problem.

Nokel81

We did some improvements to how it is synced, so maybe that helped.

Glad your setup is working now, though.

Nokel81

kontena-lens snap is unsupported. Please upgrade.


A commonly cited pain point for teams working with Kubernetes clusters is managing the configuration to connect to the cluster. All too often this ends up being either sending KUBECONFIG files with hardcoded credentials back and forth, or fragile custom shell scripts wrapping the AWS or GCP CLIs.

In this post we'll integrate Kubernetes with Keycloak so that when we execute a kubectl or helm command, if the user is not already authenticated, they'll be presented with a Keycloak browser login where they can enter their credentials. No more sharing KUBECONFIG files and forgetting to export different KUBECONFIG paths!

We'll also configure group-based access control, so we can, for example, create a KubernetesAdministrators group and have all users in that group given cluster-admin access automatically.

When we remove a user from Keycloak (or remove them from the relevant groups within Keycloak) they will then lose access to the cluster (subject to token expiry).

For this we'll be using OpenID Connect; more here on how it works.

By default, configuring Kubernetes to support OIDC auth requires passing flags to the kube-apiserver. The challenge with this approach is that only one such provider can be configured, and managed Kubernetes offerings (e.g. GCP or AWS) use it for their proprietary IAM systems.
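For reference, on a cluster where you do control the API server, those flags look roughly like this (a sketch only; the issuer URL, claim names and prefixes all depend on your setup):

kube-apiserver \
  --oidc-issuer-url=https://sso.example.com/auth/realms/master \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=sub \
  --oidc-username-prefix="oidcuser:" \
  --oidc-groups-claim=groups \
  --oidc-groups-prefix="oidcgroup:"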

To address this we will use kube-oidc-proxy, a tool from Jetstack that lets us connect to a proxy server which manages OIDC authentication and uses impersonation to give the authenticating user the required permissions. This approach has the benefit of being universal across clusters, so we don't have to follow different approaches for our managed vs unmanaged clusters.

This post is part of a series on single sign on for Kubernetes.

  1. Contents and overview

  2. Installing OpenLDAP

  3. Installing Keycloak

  4. Linking Keycloak and OpenLDAP

  5. OIDC Kubectl Login with Keycloak

  6. Authenticate any web app using ingress annotations

  7. Gitea (requires LDAP)

  8. Simple Docker Registry

  9. Harbor Docker Registry with ACL

Pre-requisites

This assumes you have CLI access to a Kubernetes cluster, will be working in a namespace called identity, and have both Helm 3 and kubectl installed and working locally. Finally it assumes that you're using NGINX for Ingress along with cert-manager for SSL certificates, with a ClusterIssuer called letsencrypt-production.

If your configuration is different, the majority of the steps will be the same, but you’ll need to change the ingress annotations accordingly.

The source for this series of tutorials can be found here: https://github.com/TalkingQuickly/kubernetes-sso-guide and cloned with:
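Assuming a standard clone, that is:

git clone https://github.com/TalkingQuickly/kubernetes-sso-guide.git
cd kubernetes-sso-guide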

All commands in the tutorial assume that they’re being executed from the root of this cloned repository.

This also assumes you’ve already followed the Installing Keycloak section and have a functioning Keycloak instance you can login to with administrator rights.

Setting up Keycloak

First we’ll create a new client in Keycloak with Client ID: kube-oidc-proxy and client protocol: openid-connect. We’ll then configure the following parameters for this client:

  • Access Type: confidential; this is required for a client secret to be generated
  • Valid Redirect URLs: http://localhost:8000 and http://localhost:18000. These are used by kubelogin as callbacks when we log in with kubectl, so a browser window can be opened for us to authenticate with Keycloak.

We can then save this new client and a new "Credentials" tab will appear. We'll need the generated client secret along with our client ID (kube-oidc-proxy) for later steps.

Setting up Kube OIDC Proxy

Having created the client, we can now create our configuration for kube-oidc-proxy. A sample configuration can be found in kube-oidc-proxy/values-kube-oidc.yml and looks like this:

oidc:
  clientId: kube-oidc-proxy
  issuerUrl: https://sso.ssotest.staging.talkingquickly.co.uk/auth/realms/master
  usernameClaim: sub

extraArgs:
  v: 10

ingress:
  enabled: true
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  hosts:
    - host: kube.ssotest.staging.talkingquickly.co.uk
      paths:
        - /
  tls:
    - secretName: oidc-proxy-tls
      hosts:
        - kube.ssotest.staging.talkingquickly.co.uk

The important things to customise here are:

  • The issuerUrl: this is the URL of our Keycloak instance, including the realm (in this case we're using the default master realm)
  • The hostnames within the ingress definition. This URL will become a second Kubernetes API endpoint, so once our SSO login is set up, our kubeconfig files will point at it instead of the default cluster endpoint

The extraArgs v: 10 setting makes kube-oidc-proxy output verbose log messages, which is useful for debugging issues. In production this line can be removed.

We can then install kube-oidc-proxy with:

helm upgrade --install kube-oidc-proxy ./charts/kube-oidc-proxy --values kube-oidc-proxy/values-kube-oidc.yml

With kube-oidc-proxy up and running, we can now configure kubectl to use it. The simplest way to do this is with a kubectl plugin called kubelogin. With this plugin installed, when you execute a kubectl command, it will open a browser window for the user to login via Keycloak. It will then handle refreshing tokens and subsequently re-authorising if the session expires.

Installation instructions for kubelogin are here. If you use Homebrew, it's as simple as brew install int128/kubelogin/kubelogin; otherwise I recommend installing krew to manage kubectl plugins, which will then allow you to install the plugin with kubectl krew install oidc-login.
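Before pointing a kubeconfig at it, you can sanity-check the plugin on its own: kubelogin provides a setup helper which runs the browser login flow directly with your issuer details (shown here with this tutorial's values; substitute your own client secret):

kubectl oidc-login setup \
  --oidc-issuer-url=https://sso.ssotest.staging.talkingquickly.co.uk/auth/realms/master \
  --oidc-client-id=kube-oidc-proxy \
  --oidc-client-secret=YOUR_CLIENT_SECRET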

We'll then want to create a kubeconfig.yml file with the following contents (there's an example in kubelogin/kubeconfig.yml):

apiVersion: v1
clusters:
- cluster:
    server: https://kube.ssotest.staging.talkingquickly.co.uk
  name: default
contexts:
- context:
    cluster: default
    namespace: identity
    user: oidc
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      # - -v1
      - --oidc-issuer-url=https://sso.ssotest.staging.talkingquickly.co.uk/auth/realms/master
      - --oidc-client-id=kube-oidc-proxy
      - --oidc-client-secret=a32807bc-4b5d-40b7-8391-91bb2b80fd30
      - --oidc-extra-scope=email
      - --grant-type=authcode
      command: kubectl
      env: null
      provideClusterInfo: false

Replacing:

  • The server URL with the ingress URL we chose for kube-oidc-proxy
  • The oidc-issuer-url with the same Keycloak URL we used in the kube-oidc-proxy configuration
  • The value of oidc-client-secret with the secret key we extracted from the credentials tab of the client in Keycloak
  • Optionally uncommenting the -v1 line if you want to see verbose logging output

We can then execute

export KUBECONFIG=./kubelogin/kubeconfig.yml
kubectl get pods  

Managing your kubeconfig files is beyond the scope of this tutorial, but if you aren't already using them I strongly recommend some combination of direnv and kubectx. Both my Debian Remote Dev Environment and OSX Setup provide these tools out of the box.

It's important to note that the export KUBECONFIG=./kubelogin/kubeconfig.yml is local to an individual terminal session, so if you switch to a new terminal tab or close and re-open your terminal, it will be gone and you'll fall back to whichever KUBECONFIG environment variable your shell is set to use by default.
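If you use direnv, a minimal .envrc at the root of the cloned repository makes this automatic whenever you cd into the project (a sketch; the path assumes the layout used in this series):

# .envrc
export KUBECONFIG=$(pwd)/kubelogin/kubeconfig.yml

Run direnv allow once to approve it; direnv will then set and unset KUBECONFIG as you enter and leave the directory.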

When we execute the above we'll be sent to a browser to log in via Keycloak, and once that completes we'll be logged in.

We will however see an error along the lines of:

Error from server (Forbidden): pods is forbidden: User "oidcuser:7d7c2183-3d96-496a-9516-dda7538854c9" cannot list resource "pods" in API group "" in the namespace "identity"

Although our user is authenticated, i.e. Kubernetes knows that the current user is oidcuser:7d7c2183-3d96-496a-9516-dda7538854c9, this user is currently not authorised to do anything.

We can fix this by creating a cluster role binding which binds our user to the cluster-admin role, the "superuser" role on Kubernetes.

We'll need to execute this in a separate terminal, i.e. one in which we have not run export KUBECONFIG=./kubelogin/kubeconfig.yml, so that KUBECONFIG still points at a kubeconfig file which gives us cluster-admin access to the cluster.

kubectl create clusterrolebinding oidc-cluster-admin --clusterrole=cluster-admin --user='oidcuser:OUR_USER_ID'

Replacing OUR_USER_ID with our login user's ID from Keycloak (or from the error message above).

Note the oidcuser: prefix, which is added due to the usernamePrefix: "oidcuser:" configuration line in our Kube OIDC Proxy values file. This prevents users defined in Keycloak from conflicting with any internal Kubernetes users.
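For reference, both prefixes are set in kube-oidc-proxy/values-kube-oidc.yml alongside the oidc settings we configured earlier; the exact keys below are an assumption about the chart's values layout, so verify them against the chart you're using (groupsPrefix is needed in the next section):

oidc:
  clientId: kube-oidc-proxy
  issuerUrl: https://sso.ssotest.staging.talkingquickly.co.uk/auth/realms/master
  usernameClaim: sub
  usernamePrefix: "oidcuser:"
  groupsClaim: groups
  groupsPrefix: "oidcgroup:"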

Keycloak login to Kubernetes with groups

The above setup allows us to use kubectl while authenticating with our Keycloak user. However, for each user we have to create an individual cluster role binding assigning them permissions. This is manual and becomes painful for anything beyond a small handful of users.

The solution to this lies in groups: we'll configure our Kubernetes OIDC implementation to be aware of Keycloak groups. We can then create a KubernetesAdmins group in Keycloak and have all users in this group given cluster-admin permissions automatically using a single ClusterRoleBinding.

Begin by creating a KubernetesAdmins group in Keycloak and then creating a new user and adding them to this group.

We then need to update our Keycloak client to include the groups the user is a member of as part of the JWT.

We do this by going back to our kube-oidc-proxy client entry under Keycloak clients and choosing the Mappers tab, then "Create".

We then enter the following:

  • Name: Groups
  • Mapper Type: Group Membership
  • Full Group Path: Off

And then choose Save.

If we uncomment the # - -v1 line in our kubelogin/kubeconfig.yml file, remove the contents of ~/.kube/cache/oidc-login/, and then execute a kubectl command (e.g. kubectl get pods), we'll be asked to log in again and we'll see that the decoded JWT now contains our groups, e.g.:

{
  ...                                         
  "groups": [                                                       
    "DockerRegistry",                                             
    "Administrators",
    "KubernetesAdmins"
  ],             
  ...
}

We can then create a cluster role binding to give anyone in the KubernetesAdmins group cluster-admin access. Our cluster role binding looks like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-admin-group
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: oidcgroup:KubernetesAdmins

Note the oidcgroup: prefix, which is added due to the groupsPrefix: "oidcgroup:" setting in our Kube OIDC Proxy values configuration. This prevents Keycloak groups from colliding with built-in Kubernetes groups.

We can apply the above with:

kubectl apply -f ./group-auth/cluster-role-binding.yml

And then delete our user specific cluster role binding with:

kubectl delete clusterrolebinding oidc-cluster-admin

We can confirm that our group-based login works with a simple kubectl get pods.
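If your cached token was issued before the group mapper was added, clear the kubelogin cache first so a fresh token containing the groups claim is requested:

rm -rf ~/.kube/cache/oidc-login/
kubectl get pods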

We can take this further by creating more restrictive cluster roles (or using more of the built-in ones) to do things like giving users access only to certain namespaces within our cluster.
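As a sketch of that, the following RoleBinding would give members of a hypothetical KubernetesDevelopers Keycloak group the built-in edit role in just the identity namespace (the group name is illustrative; note that the same oidcgroup: prefix applies):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oidc-developers-identity
  namespace: identity
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: oidcgroup:KubernetesDevelopers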

