Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials


Terraform Version

0.12.19

Affected Resource(s)

  • helm_release

Terraform Configuration Files

locals {
  kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.my_cluster.endpoint}
    certificate-authority-data: ${aws_eks_cluster.my_cluster.certificate_authority.0.data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${aws_eks_cluster.my_cluster.name}"
KUBECONFIG
}

resource "local_file" "kubeconfig" {
  content  = local.kubeconfig
  filename = "/home/terraform/.kube/config"
}

resource "null_resource" "custom" {
  depends_on    = [local_file.kubeconfig]

  # change trigger to run every time
  triggers = {
    build_number = "${timestamp()}"
  }

  # download aws-iam-authenticator and kubectl
  provisioner "local-exec" {
    command = <<EOF
      set -e

      curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
      chmod +x aws-iam-authenticator
      mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin

      echo $PATH

      aws-iam-authenticator

      curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
      chmod +x kubectl

      ./kubectl get po
    EOF
  }
}

resource "helm_release" "testchart" {
  depends_on    = [local_file.kubeconfig]
  name          = "testchart"
  chart         = "../../../resources/testchart"
  namespace     = "default"
}
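One thing the configuration above leaves implicit is how the helm provider finds the kubeconfig that local_file writes; judging by the earlier "stat /home/terraform/.kube/config" error, it falls back to a default path. A minimal sketch that makes the path explicit, assuming a helm provider version that accepts config_path in its kubernetes block:

provider "helm" {
  kubernetes {
    # Same path as the local_file resource above; adjust if the runner's home differs.
    config_path = "/home/terraform/.kube/config"
  }
}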

Debug Output

Note that

  • kubectl get po reaches the cluster and reports "No resources found in default namespace."
  • while helm_release reports: "Error: Kubernetes cluster unreachable"
  • In earlier testing it errored with "Error: stat /home/terraform/.kube/config". Now that I write the local file to that location, it no longer errors. I assume that means it successfully reads the kube config.

https://gist.github.com/eeeschwartz/021c7b0ca66a1b102970f36c42b23a59

Expected Behavior

The testchart is applied

Actual Behavior

The helm provider is unable to reach the EKS cluster.

Steps to Reproduce

On terraform.io:

  1. terraform apply

Important Factoids

Note that kubectl is able to communicate with the cluster. But something about the terraform.io environment, the .helm/config, or the helm provider itself renders the cluster unreachable.

Note of Gratitude

Thanks for all the work getting helm 3 support out the door. Holler if I’m missing anything obvious or can help diagnose further.


Question:

I'm trying to deploy a self-managed EKS cluster with Terraform. While I can deploy the cluster with addons, vpc, subnet and all other resources, it always fails at helm:

Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
with module.eks-ssp-kubernetes-addons.module.ingress_nginx[0].helm_release.nginx[0]
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/ingress-nginx/main.tf line 19, in resource "helm_release" "nginx":
resource "helm_release" "nginx" {

This error repeats for metrics_server, lb_ingress, argocd, but cluster-autoscaler throws:

Warning: Helm release "cluster-autoscaler" was created but has a failed status.
with module.eks-ssp-kubernetes-addons.module.cluster_autoscaler[0].helm_release.cluster_autoscaler[0]
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/cluster-autoscaler/main.tf line 1, in resource "helm_release" "cluster_autoscaler":
resource "helm_release" "cluster_autoscaler" {

My main.tf looks like this:

terraform {

  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.7.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

provider "aws" {
  access_key = "xxx"
  secret_key = "xxx"
  region     = "xxx"
  assume_role {
    role_arn = "xxx"
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    token                  = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  }
}

My eks.tf looks like this:

module "eks-ssp" {
    source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"

    # EKS CLUSTER
    tenant            = "DevOpsLabs2b"
    environment       = "dev-test"
    zone              = ""
    terraform_version = "Terraform v1.1.4"

    # EKS Cluster VPC and Subnet mandatory config
    vpc_id             = "xxx"
    private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]

    # EKS CONTROL PLANE VARIABLES
    create_eks         = true
    kubernetes_version = "1.19"

  # EKS SELF MANAGED NODE GROUPS
    self_managed_node_groups = {
    self_mg = {
      node_group_name        = "DevOpsLabs2b"
      subnet_ids             = ["xxx","xxx", "xxx", "xxx"]
      create_launch_template = true
      launch_template_os     = "bottlerocket"       # amazonlinux2eks  or bottlerocket or windows
      custom_ami_id          = "xxx"
      public_ip              = true                   # Enable only for public subnets
      pre_userdata           = <<-EOT
            yum install -y amazon-ssm-agent 
            systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent 
        EOT

      disk_size     = 10
      instance_type = "t2.small"
      desired_size  = 2
      max_size      = 10
      min_size      = 0
      capacity_type = "" # Optional Use this only for SPOT capacity as  capacity_type = "spot"

      k8s_labels = {
        Environment = "dev-test"
        Zone        = ""
        WorkerType  = "SELF_MANAGED_ON_DEMAND"
      }

      additional_tags = {
        ExtraTag    = "t2x-on-demand"
        Name        = "t2x-on-demand"
        subnet_type = "public"
      }
      create_worker_security_group = false # Creates a dedicated sec group for this Node Group
    },
  }
}

module "eks-ssp-kubernetes-addons" {
    # NOTE: the module name is inferred from the error messages above; the source path is an assumption.
    source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"

    enable_amazon_eks_vpc_cni             = true
        amazon_eks_vpc_cni_config = {
        addon_name               = "vpc-cni"
        addon_version            = "v1.7.5-eksbuild.2"
        service_account          = "aws-node"
        resolve_conflicts        = "OVERWRITE"
        namespace                = "kube-system"
        additional_iam_policies  = []
        service_account_role_arn = ""
        tags                     = {}
    }
    enable_amazon_eks_kube_proxy          = true
        amazon_eks_kube_proxy_config = {
        addon_name               = "kube-proxy"
        addon_version            = "v1.19.8-eksbuild.1"
        service_account          = "kube-proxy"
        resolve_conflicts        = "OVERWRITE"
        namespace                = "kube-system"
        additional_iam_policies  = []
        service_account_role_arn = ""
        tags                     = {}
    }

    #K8s Add-ons
    enable_aws_load_balancer_controller   = true
    enable_metrics_server                 = true
    enable_cluster_autoscaler             = true
    enable_aws_for_fluentbit              = true
    enable_argocd                         = true
    enable_ingress_nginx                  = true

    depends_on = [module.eks-ssp.self_managed_node_groups]
}

Answer:

The OP confirmed in a comment that the problem was resolved:

"Of course. I think I found the issue. Doing kubectl get svc throws: An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::xxx:user/terraform_deploy is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxx:user/terraform_deploy.
Solved it by using my actual role, that's crazy. No idea why it was calling itself."
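In other words, the assume_role block in the aws provider pointed at an IAM user ARN rather than at a role ARN the calling credentials may assume, so STS rejected the AssumeRole call. A hypothetical corrected provider block (the region and ARN are placeholders, not values from the question):

provider "aws" {
  region = "eu-west-1" # placeholder

  # role_arn must reference an IAM role the calling credentials are allowed to
  # assume, not the IAM user that is running Terraform itself.
  assume_role {
    role_arn = "arn:aws:iam::111122223333:role/terraform-deploy"
  }
}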

For a similar problem, see also this issue.


Source: Stackoverflow.com

Sorry, this post was removed by Reddit’s spam filters.


level 1

This is a known issue with Terraform and Kubernetes. You can see the warning here: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs. You cannot create the cluster and deploy resources onto it in the same Terraform run. We use two separate modules for this and have had 0 issues since then: a "physical" module that deploys EKS and other physical resources, and a "logical" module that deploys our manifests. Using Terragrunt we can deploy them both at the same time, but they technically count as different Terraform runs, so it gets around this issue.
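A minimal sketch of how the "logical" root module can consume outputs from the "physical" one with Terragrunt; the directory names, dependency label, and output names are illustrative, not taken from the comment:

# logical/terragrunt.hcl
dependency "physical" {
  config_path = "../physical"
}

inputs = {
  cluster_name     = dependency.physical.outputs.cluster_name
  cluster_endpoint = dependency.physical.outputs.cluster_endpoint
}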

level 2

This. And by "different modules" it should be made clear that these should be different root modules, meaning two separate Terraform lifecycles.


level 1

The same happened to me just recently. Terraform didn’t know where to read configs. This resolved the issue:

export KUBE_CONFIG_PATH=~/.kube/config

How Do You Troubleshoot the Kubernetes Error "cluster unreachable"?

Problem scenario
You get the message "Kubernetes cluster unreachable" when running a helm, kubectl, az, or eks command. What should you do?

Possible solution #1
Has a router been reconfigured? Has a new firewall rule been imposed? Has a data center gone down that housed the cluster? Did you receive an email about a maintenance window or a configuration change?

Possible solution #2
What are the permissions of the relevant .yaml file? Can you try changing them? (Idea taken from https://github.com/k3s-io/k3s/issues/1126.)

Possible solution #3
If you were running a kubectl command, can you try to run commands like these?

sudo cp ~/.kube/config ~/.kube/bak.config.bak
kubectl config view --raw >~/.kube/config

(Idea taken from https://github.com/k3s-io/k3s/issues/1126.)

Possible solution #4
Warning: this may install something undesirable. Use this with caution.
If you were using helm, Rancher and K3s, can you try running a command like this?

sudo helm install harbor/harbor --version 1.3.0 --generate-name --kubeconfig /etc/rancher/k3s/k3s.yaml

(Idea taken from https://github.com/k3s-io/k3s/issues/1126.)

Possible solution #5
Are you using Rancher and K3s? Try running this:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

(Idea taken from https://github.com/k3s-io/k3s/issues/1126 )

Possible solution #6
Can you ping the remote Kubernetes server? Are you not on a server that has access to the remote network? Is the cluster available from the network you are on? Are you running the command from a workstation that is not connected to a VPN?

Possible solution #7
Are you using GKE? If so, read this posting: https://forum.rasa.com/t/rasa-x-kubernetes-cluster-unreachable-error-using-github-actions/37073/8

Possible solution #8
Verify configuration files. If there are two clusters configured, can you try to only use one cluster? Try each cluster one-at-a-time. This may help you pinpoint the problem.

Possible solution #9
Run commands like these:

kubectl config view
kubectl config use-context CONTEXT-CHOSEN-FROM-PREVIOUS-COMMAND-OUTPUT
kubectl config --help

(This was adapted from https://stackoverflow.com/questions/49260135/unable-to-connect-to-the-server-dial-tcp-i-o-time-out.)


If you want assistance troubleshooting a network problem, see this posting.




Source

Terraform.io to EKS "Error: Kubernetes cluster unreachable" #400


The token auth configuration below ultimately worked for me. Perhaps this should be the canonical approach for Terraform Cloud -> EKS, rather than using
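The configuration this comment refers to was not captured in this copy of the thread. A sketch of the token-based approach, consistent with the provider block shown in the question earlier on this page (the data source names are placeholders; the cluster reference reuses aws_eks_cluster.my_cluster from the configuration above):

data "aws_eks_cluster" "this" {
  name = aws_eks_cluster.my_cluster.name
}

data "aws_eks_cluster_auth" "this" {
  name = aws_eks_cluster.my_cluster.name
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}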

I don't see how this could possibly work; with Helm 3 it seems to be completely broken.
Below is my configuration and I can't connect to the cluster.
My kubernetes provider works, but not the kubernetes block within the helm provider, which has the same settings.

seeing same issue

@eeeschwartz I can confirm it's working with a newly created cluster. Here is a sample configuration to prove it.

From another perspective: with an already created cluster I see the same issue, and debug=true doesn't help at all.

Anyone found a workaround yet?

@vfiset I believe there is no workaround. It's just an issue with policies (my guess).

The actual provider doesn’t give you enough debug information. So, you will probably need to run helm install manually to find an issue.

My guess is that the aws-auth config map is blocking access. In the example that @kharandziuk has shown here there's no aws-auth configmap defined. Also it's worth noting that the usage of helm here is in the same Terraform run as the EKS run, which means that the default credentials for EKS are the ones being used to deploy helm.

I have a fairly complicated setup where I'm assuming roles between the different stages of the EKS cluster deployment.
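If the aws-auth hypothesis above is correct, the role Terraform assumes when running the helm provider has to be mapped in that ConfigMap. A hedged sketch of such a mapping using the hashicorp/kubernetes provider (kubernetes_config_map_v1_data needs a reasonably recent provider version; the role ARN and username are placeholders):

resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Map the deployer role into the cluster so helm/kubectl calls made with it
    # are authenticated; system:masters is the broadest (admin) group.
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::111122223333:role/terraform-deploy" # placeholder
        username = "terraform-deploy"
        groups   = ["system:masters"]
      }
    ])
  }

  force = true # take over ownership of the aws-auth fields managed by EKS
}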

Source

helm 3.0-beta.3 tries to reach cluster when --dry-run is used #6404

$ helm template /chart_dir --dry-run
Error: Kubernetes cluster unreachable

Output of helm version:
version.BuildInfo


Can confirm the bug. I tried it like this:

I never knew we needed cluster connectivity to run template, and didn't know a --dry-run flag existed for template! This wasn't present in helm v2. I'll check the docs. But from the command's help, it doesn't look like we need any sort of cluster connectivity.

Also, the bug might be related to this PR #6255 which introduced the Kubernetes cluster unreachable error. Gotta dig more to understand this.

It just looks like a bug introduced with #6255. It doesn't take --dry-run into account before checking. Should be an easy fix.

@bacongobbler so normally template connects to the cluster? And hence the --dry-run feature?

helm template should not require a connection to the cluster for it to work unless the --validate flag is set.

Internally, helm template is a helm install --dry-run --replace (see here). You could probably reproduce the same issue by calling helm install --dry-run --replace.

Cool. I asked to understand what the expected functionality is.

So, from a user perspective, it looks as though the --dry-run feature is present because normally there's some cluster connectivity, which is not true. So I think we could get rid of the --dry-run flag? Or is there any expected functionality for it? It also causes confusion for me about what happens when someone uses both --dry-run and --validate.

Source

Helm 3 unable to reach cluster #7086

I am trying to install a helm chart that is publicly available, and when doing so I get the error "Error: Kubernetes cluster unreachable", which is preventing me from installing said chart.

Steps to reproduce:

Error message: Error: Kubernetes cluster unreachable

The same error message is produced when applying directly without a dry run.


Can you run kubectl cluster-info? Can it reach the cluster?

I’m having the same issue here. It doesn’t seem to be looking at my current context.

You can see that helm doesn’t reach the cluster until I explicitly provide the config file that contains my current context.

If it helps, I have multiple files within the ~/.kube directory and my $KUBECONFIG contains all of them (i.e. /path/to/first.conf:/path/to/second.conf:/path/to/third.conf).

Same here with multiple kubeconfig files. As soon as I set $KUBECONFIG to one specific file Helm works perfectly

For those interested, I got tired of the quirky surprises of working with multiple kubeconfigs, so I wrote a tool to manage many kubeconfigs at once and this bug is no longer an issue for me: https://github.com/particledecay/kconf

After installing helm on Ubuntu I was also experiencing this issue.
I just reran

From Organizing Cluster Access Using kubeconfig Files:
"By default, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag."

Helm works pretty much the same as kubectl with respect to cluster access.

When you have multiple kubeconfig files specified in $KUBECONFIG, the first file is used by default. You therefore need to set the context if you want to use another file instead. With kubectl, you can use the command kubectl config use-context.
Alternatively, you can also use tools like kubectx for this: https://github.com/ahmetb/kubectx. Or use the --kube-context flag when running the Helm command.
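For readers driving Helm through Terraform, as in the issues above, the equivalent is to pin the provider to one specific kubeconfig file and context instead of relying on whatever $KUBECONFIG resolves to. A hedged sketch (config_path and config_context are attributes of the helm provider's kubernetes block; the values are placeholders):

provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"     # or a single file picked out of a $KUBECONFIG list
    config_context = "my-cluster-context" # placeholder context name
  }
}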

@pageanengineer Is this still an issue for you or can this issue be closed?


Source

2.6.0 provider version causing Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion #893

⚠️ NOTE FROM MAINTAINERS ⚠️

v1alpha1 of the client authentication API was removed in the Kubernetes client in version 1.24. The latest release of this provider was updated to use 1.24 of the Kubernetes client Go modules, and 3.9 of the upstream Helm module. We know this seems like a breaking change, but it is expected, as API versions marked alpha can be removed in minor releases of the Kubernetes project.

The upstream helm Go module was also updated to use the 1.24 client in helm 3.9 so you will see this issue if you use the helm command directly with a kubeconfig that tries to use the v1alpha1 client authentication API.

AWS users will need to update their config to use the v1beta1 API. Support for v1beta1 was added as default in the awscli in v1.24.0 so you may need to update your awscli package and run aws eks update-kubeconfig again.

Adding this note here as users pinning to the previous version of this provider will not see a fix to this issue the next time they update: you need to update your config to the new version and update your exec plugins. If your exec plugin still only supports v1alpha1 you need to open an issue with them to update it.
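For the Terraform helm provider this typically means bumping api_version in the exec block. A minimal sketch for EKS, assuming data sources like the ones used in the question earlier on this page and a placeholder cluster name:

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1" # was v1alpha1
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", "my-cluster"] # placeholder name
    }
  }
}

This requires an awscli recent enough to emit the v1beta1 ExecCredential (per the note above, v1.24.0 or later of the awscli).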


This is how we set the provider

I tried changing the api_version to client.authentication.k8s.io/v1beta1 but then that gave me a mismatch with the expected value of client.authentication.k8s.io/v1alpha1.


Expected Behavior

Terraform plans correctly

Actual Behavior

Terraform fails with this error

Important Factoids

Pinning the provider version to the last release 2.5.1 works

A fast way that we pinned our root modules using


cc: @jrhouston for visibility

Is this specific to AWS EKS clusters? Are they still on v1alpha? Seems like this also occurred with 2.4.0

Yes, the AWS EKS cluster is using the v1alpha1 apiVersion

It looks like the v1alpha1 authentication API was removed in Kubernetes 1.24 – we upgraded to the 0.24.0 line of k8s dependencies in the latest version of this provider. It feels like a breaking change but removal of alpha APIs is expected in minor version bumps of Kubernetes.

I was able to fix this for EKS by updating the awscli package and changing the api_version in my exec block to v1beta1.

The latest version of the awscli uses this version:

@jrhouston as one who primarily works with AWS, I request that you track Kubernetes dependencies along the lines of the latest Kubernetes version EKS supports, currently 1.22. This would help to preserve compatibility between the provider and EKS clusters. (I understand if people not using EKS feel differently, but you can’t please everyone, so I’m staking my claim.)

@jrhouston how do I switch to the v1beta1 version of the API?
Did it break anything with the different helm packages you had installed while doing so?

Edit 1:
ah I think I found it

Edit 2:
It works

@jrhouston as one who primarily works with AWS, I request that you track Kubernetes dependencies along the lines of the latest Kubernetes version EKS supports, currently 1.22. This would help to preserve compatibility between the provider and EKS clusters. (I understand if people not using EKS feel differently, but you can’t please everyone, so I’m staking my claim.)

I agree with you in principle, and we do tend to hold off on releasing things that are going to break on the older versions of Kubernetes in the main cloud providers.

However, in this case the API contract is actually between the aws command and the Kubernetes client. The apiVersion here is that of the YAML that the aws eks get-token command writes to stdout. It's not actually a cluster resource, so this will still work on EKS cluster version 1.22 and below – it's just that you need to update the api_version in the exec block of your Terraform config, and potentially update your awscli package to the latest version. You can see they moved to v1beta1 in their changelog a few versions ago.

You may also need to run the aws eks update-kubeconfig command if you are using a kubeconfig file.

Perhaps we should add a validator to check if the version specified is v1alpha1 and write out a warning message telling the user what to do here.

If there are any non-EKS users watching this issue I would appreciate if they could chime in on their situation.

Source

My k8s version is v1.17.13.
My certificate expired today, so I ran
kubeadm alpha certs renew all
systemctl restart kubelet
on all my master servers.
All the kubectl commands that I ran worked fine, like
kubectl get nodes, kubectl scale, kubectl describe ...
However, running kubectl logs gives the following error:
error: You must be logged in to the server (the server has asked for the client to provide credentials)

Any idea why?
I believe my ~/.kube/config is OK because I am able to run other kubectl commands. I deleted the kube-apiserver to force it to restart, but still the same issue.

Could you please help me with this issue?
Thanks

asked Jan 9, 2021 at 5:09


While looking around, I saw this on my worker nodes:
Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid

After further troubleshooting, only 2 of my 3 masters were causing this error:
kubectl logs error: You must be logged in to the server (the server has asked for the client to provide credentials)

After checking a lot of resources, I really couldn't find what was causing the problem, so I decided to reboot each of the 2 failing masters one at a time, and that did the trick. I guess some of the pods in kube-system required restarting.

Additionally, I restarted kubelet on all worker nodes, but I am not sure whether this had an effect.

Note that https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal makes no mention of rebooting the masters.

A final note: I am not sure why the cert renewal was not smoother.
Before running into the kubectl logs problem described in this post, I ran into another error on my first master (a "bootstrap-kubelet.conf does not exist" issue which would not allow kubelet to restart), so I had to follow the fix in https://stackoverflow.com/questions/56320930/renew-kubernetes-pki-after-expired.

answered Jan 9, 2021 at 14:54

