Error the server has asked for the client to provide credentials get pods

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
We set up Kubernetes 1.10.1 on CoreOS with three nodes.
The setup was successful:

NAME                STATUS    ROLES     AGE       VERSION
node1.example.com   Ready     master    19h       v1.10.1+coreos.0
node2.example.com   Ready     node      19h       v1.10.1+coreos.0
node3.example.com   Ready     node      19h       v1.10.1+coreos.0

NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
default       pod-nginx2-689b9cdffb-qrpjn                 1/1       Running   0          16h
kube-system   calico-kube-controllers-568dfff588-zxqjj    1/1       Running   0          18h
kube-system   calico-node-2wwcg                           2/2       Running   0          18h
kube-system   calico-node-78nzn                           2/2       Running   0          18h
kube-system   calico-node-gbvkn                           2/2       Running   0          18h
kube-system   calico-policy-controller-6d568cc5f7-fx6bv   1/1       Running   0          18h
kube-system   kube-apiserver-x66dh                        1/1       Running   4          18h
kube-system   kube-controller-manager-787f887b67-q6gts    1/1       Running   0          18h
kube-system   kube-dns-79ccb5d8df-b9skr                   3/3       Running   0          18h
kube-system   kube-proxy-gb2wj                            1/1       Running   0          18h
kube-system   kube-proxy-qtxgv                            1/1       Running   0          18h
kube-system   kube-proxy-v7wnf                            1/1       Running   0          18h
kube-system   kube-scheduler-68d5b648c-54925              1/1       Running   0          18h
kube-system   pod-checkpointer-vpvg5                      1/1       Running   0          18h

But when I try to see the logs of any pod, kubectl gives the following error:

kubectl logs -f pod-nginx2-689b9cdffb-qrpjn
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log pod-nginx2-689b9cdffb-qrpjn))

Trying to get inside a pod (using kubectl exec) gives the following error:

kubectl exec -ti pod-nginx2-689b9cdffb-qrpjn bash
error: unable to upgrade connection: Unauthorized

What you expected to happen:

1. It displays the logs of the pods
2. We can exec into the pods

Environment:

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g. from /etc/os-release):
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1576.4.0
VERSION_ID=1576.4.0
BUILD_ID=2017-12-06-0449
PRETTY_NAME="Container Linux by CoreOS 1576.4.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
  • Kernel (e.g. uname -a):

Linux node1.example.com 4.13.16-coreos-r2 #1 SMP Wed Dec 6 04:27:34 UTC 2017 x86_64 Intel(R) Xeon(R) CPU L5640 @ 2.27GHz GenuineIntel GNU/Linux

  • Install tools:
  1. Kubelet
Description=Kubelet via Hyperkube ACI
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid 
  --volume=resolv,kind=host,source=/etc/resolv.conf 
  --mount volume=resolv,target=/etc/resolv.conf 
  --volume var-lib-cni,kind=host,source=/var/lib/cni 
  --mount volume=var-lib-cni,target=/var/lib/cni 
  --volume var-log,kind=host,source=/var/log 
  --mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --config=/etc/kubernetes/config \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --network-plugin=cni \
  --allow-privileged \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --hostname-override=node1.example.com \
  --node-labels=node-role.kubernetes.io/master \
  --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
  2. KubeletConfiguration
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
staticPodPath: "/etc/kubernetes/manifests"
clusterDomain: "cluster.local"
clusterDNS: [ "10.3.0.10" ]
nodeStatusUpdateFrequency: "5s"
clientCAFile: "/etc/kubernetes/ca.crt"

We have also specified the "--kubelet-client-certificate" and "--kubelet-client-key" flags in the kube-apiserver.yaml file:

- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
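
A quick sanity check (a hedged sketch, not part of the original report; the paths are taken verbatim from the flags and config above and may need adjusting to wherever those files actually live on the node, and node1.example.com:10250 is only the assumed kubelet address) is to confirm that the apiserver's kubelet client certificate is signed by the CA the kubelet trusts, and that the kubelet accepts it:

# Assumed paths from the kubelet unit and kube-apiserver.yaml above.
# 1) The apiserver's kubelet client cert should verify against the kubelet's client CA:
openssl verify -CAfile /etc/kubernetes/ca.crt /etc/kubernetes/secrets/apiserver.crt
# 2) Call the kubelet API directly with that cert; JSON output means kubelet-side
#    auth works, while an "Unauthorized" body reproduces the logs/exec failure:
curl -sk --cert /etc/kubernetes/secrets/apiserver.crt \
     --key /etc/kubernetes/secrets/apiserver.key \
     https://node1.example.com:10250/pods | head -c 200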

So what are we missing here?
Thanks in advance :)

My k8s version is v1.17.13.
My certificates expired today, so I ran
kubeadm alpha certs renew all
systemctl restart kubelet
on all my master servers.
All the kubectl commands that I ran worked fine, like
kubectl get nodes, kubectl scale, kubectl describe …
However, running kubectl logs gives the following error:
error: You must be logged in to the server (the server has asked for the client to provide credentials

Any idea why?
I believe my ~/.kube/config is OK because I am able to run other kubectl commands. I deleted the kube-apiserver pod to force it to restart, but I still have the same issue.
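
One thing worth checking (a hedged suggestion; /etc/kubernetes/pki/apiserver-kubelet-client.crt is only the default kubeadm path) is the certificate the apiserver presents to kubelets, since kubectl logs and kubectl exec are the commands that go through the apiserver-to-kubelet path while the other commands do not:

# Client cert the apiserver uses when talking to kubelets (kubeadm default path assumed):
openssl x509 -noout -dates -in /etc/kubernetes/pki/apiserver-kubelet-client.crt
# Client cert embedded in the kubeconfig, for comparison:
grep 'client-certificate-data' ~/.kube/config | head -1 | awk '{print $2}' | base64 -d | openssl x509 -noout -dates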

Could you please help me with this issue?
Thanks,

While looking around, I saw this on my worker nodes:
Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid

After further troubleshooting, only 2 of my 3 masters were causing this error:
kubectl logs error: You must be logged in to the server (the server has asked for the client to provide credentials

After checking a lot of resources, I really couldn't find what was causing the problem, so I decided to reboot each of the 2 failing masters one at a time, and that did the trick. I guess some of the pods in kube-system required restarting.

Additionally, I restarted kubelet on all worker nodes, but I am not sure whether this had an effect.

Note that https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal makes no mention of rebooting the masters.
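
For what it's worth, a hedged way to confirm on each master that the running apiserver actually picked up the renewed certificate (replace <master> with the control-plane address; the reboot presumably had the same effect as restarting the apiserver static pod):

# Expiry of all kubeadm-managed certs ("kubeadm alpha certs" is the 1.17 syntax;
# newer releases use "kubeadm certs check-expiration"):
kubeadm alpha certs check-expiration
# Serving certificate the live apiserver presents, which should show the new dates:
echo | openssl s_client -connect <master>:6443 2>/dev/null | openssl x509 -noout -dates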

A final note: I am not sure why the cert renewal was not smoother.
Before running into the kubectl logs problem described in this post,
I ran into an error on my first master (a "bootstrap-kubelet.conf does not exist" issue which would not let kubelet restart), so I had to follow the fix in https://stackoverflow.com/questions/56320930/renew-kubernetes-pki-after-expired


Question:

I'm trying to deploy a self-managed EKS cluster with Terraform. While I can deploy the cluster with add-ons, VPC, subnets, and all other resources, it always fails at the Helm releases:

Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
with module.eks-ssp-kubernetes-addons.module.ingress_nginx[0].helm_release.nginx[0]
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/ingress-nginx/main.tf line 19, in resource "helm_release" "nginx":
resource "helm_release" "nginx" {

This error repeats for metrics_server, lb_ingress, argocd, but cluster-autoscaler throws:

Warning: Helm release "cluster-autoscaler" was created but has a failed status.
with module.eks-ssp-kubernetes-addons.module.cluster_autoscaler[0].helm_release.cluster_autoscaler[0]
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/cluster-autoscaler/main.tf line 1, in resource "helm_release" "cluster_autoscaler":
resource "helm_release" "cluster_autoscaler" {

My main.tf looks like this:

terraform {

  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.7.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

provider "aws" {
  access_key = "xxx"
  secret_key = "xxx"
  region     = "xxx"
  assume_role {
    role_arn = "xxx"
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    token                  = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  }
}

My eks.tf looks like this:

module "eks-ssp" {
    source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"

    # EKS CLUSTER
    tenant            = "DevOpsLabs2b"
    environment       = "dev-test"
    zone              = ""
    terraform_version = "Terraform v1.1.4"

    # EKS Cluster VPC and Subnet mandatory config
    vpc_id             = "xxx"
    private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]

    # EKS CONTROL PLANE VARIABLES
    create_eks         = true
    kubernetes_version = "1.19"

  # EKS SELF MANAGED NODE GROUPS
    self_managed_node_groups = {
    self_mg = {
      node_group_name        = "DevOpsLabs2b"
      subnet_ids             = ["xxx","xxx", "xxx", "xxx"]
      create_launch_template = true
      launch_template_os     = "bottlerocket"       # amazonlinux2eks  or bottlerocket or windows
      custom_ami_id          = "xxx"
      public_ip              = true                   # Enable only for public subnets
      pre_userdata           = <<-EOT
            yum install -y amazon-ssm-agent 
            systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent 
        EOT

      disk_size     = 10
      instance_type = "t2.small"
      desired_size  = 2
      max_size      = 10
      min_size      = 0
      capacity_type = "" # Optional Use this only for SPOT capacity as  capacity_type = "spot"

      k8s_labels = {
        Environment = "dev-test"
        Zone        = ""
        WorkerType  = "SELF_MANAGED_ON_DEMAND"
      }

      additional_tags = {
        ExtraTag    = "t2x-on-demand"
        Name        = "t2x-on-demand"
        subnet_type = "public"
      }
      create_worker_security_group = false # Creates a dedicated sec group for this Node Group
    },
  }
}

    # NOTE: the add-on settings below presumably belong to a second module block
    # (module "eks-ssp-kubernetes-addons", judging by the error output above);
    # its opening line appears to be missing from this paste.
    enable_amazon_eks_vpc_cni             = true
        amazon_eks_vpc_cni_config = {
        addon_name               = "vpc-cni"
        addon_version            = "v1.7.5-eksbuild.2"
        service_account          = "aws-node"
        resolve_conflicts        = "OVERWRITE"
        namespace                = "kube-system"
        additional_iam_policies  = []
        service_account_role_arn = ""
        tags                     = {}
    }
    enable_amazon_eks_kube_proxy          = true
        amazon_eks_kube_proxy_config = {
        addon_name               = "kube-proxy"
        addon_version            = "v1.19.8-eksbuild.1"
        service_account          = "kube-proxy"
        resolve_conflicts        = "OVERWRITE"
        namespace                = "kube-system"
        additional_iam_policies  = []
        service_account_role_arn = ""
        tags                     = {}
    }

    #K8s Add-ons
    enable_aws_load_balancer_controller   = true
    enable_metrics_server                 = true
    enable_cluster_autoscaler             = true
    enable_aws_for_fluentbit              = true
    enable_argocd                         = true
    enable_ingress_nginx                  = true

    depends_on = [module.eks-ssp.self_managed_node_groups]
}

Answer:

OP has confirmed in the comment that the problem was resolved:

Of course. I think I found the issue. Doing "kubectl get svc" throws: "An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::xxx:user/terraform_deploy is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxx:user/terraform_deploy"
Solved it by using my actual role, that’s crazy. No idea why it was calling itself.

For a similar problem, see also this issue.
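
If it helps, the same identity check can be reproduced outside Terraform (a hedged sketch; <cluster-name> and <region> are placeholders, and the credentials are whatever the providers above resolve to):

# Which IAM principal the configured credentials actually resolve to:
aws sts get-caller-identity
# The token the kubernetes/helm providers would send to the cluster:
aws eks get-token --cluster-name <cluster-name> --region <region> | head -c 200
# Or write a kubeconfig and confirm the cluster is reachable at all:
aws eks update-kubeconfig --name <cluster-name> --region <region>
kubectl get ns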

Source: Stackoverflow.com

Answer (to the original CoreOS kubelet question above):

It looks like you misconfigured the kubelet:

You missed the --client-ca-file flag in your kubelet service file.

That's why you can get some general information from the master, but can't get access to the nodes.

This flag is responsible for client-certificate authentication on the kubelet; without it, you cannot get access to the nodes.
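
To confirm what the running kubelet actually uses (a hedged check; it assumes the unit is installed as kubelet.service, and the paths are the ones from the question):

# The kubelet unit should either pass --client-ca-file directly or point --config
# at a KubeletConfiguration whose clientCAFile is set:
systemctl cat kubelet | grep -E 'client-ca-file|--config'
grep clientCAFile /etc/kubernetes/config
# The CA file itself should exist and be a readable certificate:
openssl x509 -noout -subject -in /etc/kubernetes/ca.crt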


Bug Description

// client is a client-go REST client; proxy a GET through the apiserver to the
// istiod pod's debug endpoint on port 8080.
request := client.Get().
    Namespace(namespace).
    Resource("pods").
    SubResource("proxy").
    Name(fmt.Sprintf("%s:%d", "istiod-745bd859b5-tg9rn", 8080)).
    Suffix("/debug/syncz")
request.Timeout(3 * time.Second)
out, err := request.DoRaw(ctx) // err carries the 401 described below
https://192.168.6.70:6443/api/v1/namespaces/istio-system/pods/istiod-745bd859b5-tg9rn:8080/proxy/debug/syncz?timeout=3s

The query returns the error: the server has asked for the client to provide credentials (get pods istiod-745bd859b5-tg9rn:8080)
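
For comparison, the same proxy subresource can be requested with the credentials from a kubeconfig via kubectl's raw API access (a hedged check, not part of the original report):

# Should return the same JSON as the in-container curl below if the kubeconfig
# credentials are accepted; a 401 here would point at the client credentials instead:
kubectl get --raw "/api/v1/namespaces/istio-system/pods/istiod-745bd859b5-tg9rn:8080/proxy/debug/syncz"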

Prior to 1.11.0 there was no problem; 1.11.3 and 1.11.4 both show the same problem.

There are no errors when the same query is made from inside the container:

istio-proxy@istiod-745bd859b5-tg9rn:/$ curl localhost:8080/debug/syncz
[
  {
    "proxy": "istio-ingressgateway-5c48f9f574-tv9hh.istio-system",
    "istio_version": "1.11.4",
    "cluster_sent": "Kys5FjN2yVo=de9871ac-b81f-4829-9ed4-157b414eaaa9",
    "cluster_acked": "Kys5FjN2yVo=de9871ac-b81f-4829-9ed4-157b414eaaa9",
    "listener_sent": "Kys5FjN2yVo=1a1b167e-2c4b-413c-881a-4476fcd60f9a",
    "listener_acked": "Kys5FjN2yVo=1a1b167e-2c4b-413c-881a-4476fcd60f9a",
    "endpoint_sent": "Kys5FjN2yVo=fbca07f4-6fb1-438c-bc6d-6b96ff9f2f89",
    "endpoint_acked": "Kys5FjN2yVo=fbca07f4-6fb1-438c-bc6d-6b96ff9f2f89"
  },
  {
    "proxy": "istio-egressgateway-564696c78-xh78x.istio-system",
    "istio_version": "1.11.4",
    "cluster_sent": "Kys5FjN2yVo=b8fe0b5b-369d-4ce5-a462-7dcfc316bbb4",
    "cluster_acked": "Kys5FjN2yVo=b8fe0b5b-369d-4ce5-a462-7dcfc316bbb4",
    "listener_sent": "Kys5FjN2yVo=3ad19aee-ab72-4221-adee-2e8b459cb8d4",
    "listener_acked": "Kys5FjN2yVo=3ad19aee-ab72-4221-adee-2e8b459cb8d4",
    "endpoint_sent": "Kys5FjN2yVo=09fd43c0-4aa5-47cc-a233-6ef9977df7c2",
    "endpoint_acked": "Kys5FjN2yVo=09fd43c0-4aa5-47cc-a233-6ef9977df7c2"
  }
]

Version

# ./istioctl version
client version: 1.11.4
control plane version: 1.11.4
data plane version: 1.11.4 (2 proxies)

Additional Information

# kubelet --version
Kubernetes v1.18.2
# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.20", GitCommit:"1f3e19b7beb1cc0110255668c4238ed63dadb7ad", GitTreeState:"clean", BuildDate:"2021-06-16T12:51:17Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
