Error execution phase upload-config/kubelet: error writing Crisocket information for the control-plane node: timed out waiting for the condition

Hi, I’m trying out kubeadm and I’m using the official docs to set up an HA cluster. I have managed to create an etcd cluster, but the init step on the first master node is failing with the following error:

error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
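
For reference, the ClusterConfiguration that kubeadm uploads to the kubeadm-config ConfigMap during this run decodes to the following (reconstructed from the verbose log under "What happened?"; the newlines inside the ConfigMap dump were flattened by the paste):

apiServer:
  certSANs:
  - 127.0.0.1
  - 10.10.10.90
  - 10.10.10.91
  - 10.10.10.92
  - 10.10.10.76
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 10.10.10.76:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    endpoints:
    - https://10.10.10.90:2379
    - https://10.10.10.91:2379
    - https://10.10.10.92:2379
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
scheduler: {}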

What keywords did you search in kubeadm issues before filing this one?

error execution phase upload-config/kubelet

and got:

#1382
#1227

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

root@k8s-c3-m1:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:        16.04
Codename:       xenial

root@k8s-c3-m1:~# uname -a
Linux k8s-c3-m1 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

root@k8s-c3-m1:~# docker --version
Docker version 18.06.1-ce, build e68fc7a
root@k8s-c3-m1:~#

root@k8s-c3-m1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:35:32Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

root@k8s-c3-m1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
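
Side note: the localhost:8080 message just means kubectl has no kubeconfig configured yet; once kubeadm has written /etc/kubernetes/admin.conf, pointing kubectl at it makes the server version resolve, e.g.:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl version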

Note: I also tried Docker versions 17.03.3-ce and 18.09.3-ce.

I’m running this on VMs in VMware.

What happened?

root@k8s-c3-m1:~# kubeadm init --config /root/kubeadmcfg.yaml -v 256
I0304 14:52:28.103162    1391 initconfiguration.go:169] loading configuration from the given file
I0304 14:52:28.107089    1391 interface.go:384] Looking for default routes with IPv4 addresses
I0304 14:52:28.107141    1391 interface.go:389] Default route transits interface "eth0"
I0304 14:52:28.107440    1391 interface.go:196] Interface eth0 is up
I0304 14:52:28.107587    1391 interface.go:244] Interface "eth0" has 1 addresses :[10.10.10.93/24].
I0304 14:52:28.107695    1391 interface.go:211] Checking addr  10.10.10.93/24.
I0304 14:52:28.107724    1391 interface.go:218] IP found 10.10.10.93
I0304 14:52:28.107759    1391 interface.go:250] Found valid IPv4 address 10.10.10.93 for interface "eth0".
I0304 14:52:28.107791    1391 interface.go:395] Found active IP 10.10.10.93 
I0304 14:52:28.107979    1391 version.go:163] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable.txt
I0304 14:52:29.493555    1391 feature_gate.go:206] feature gates: &{map[]}
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
I0304 14:52:29.494477    1391 checks.go:572] validating Kubernetes and kubeadm version
I0304 14:52:29.494609    1391 checks.go:171] validating if the firewall is enabled and active
I0304 14:52:29.506263    1391 checks.go:208] validating availability of port 6443
I0304 14:52:29.506767    1391 checks.go:208] validating availability of port 10251
I0304 14:52:29.507110    1391 checks.go:208] validating availability of port 10252
I0304 14:52:29.507454    1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0304 14:52:29.507728    1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0304 14:52:29.507959    1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0304 14:52:29.508140    1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0304 14:52:29.508316    1391 checks.go:430] validating if the connectivity type is via proxy or direct
I0304 14:52:29.508504    1391 checks.go:466] validating http connectivity to first IP address in the CIDR
I0304 14:52:29.508798    1391 checks.go:466] validating http connectivity to first IP address in the CIDR
I0304 14:52:29.509053    1391 checks.go:104] validating the container runtime
I0304 14:52:29.749661    1391 checks.go:130] validating if the service is enabled and active
I0304 14:52:29.778962    1391 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0304 14:52:29.779324    1391 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0304 14:52:29.779573    1391 checks.go:644] validating whether swap is enabled or not
I0304 14:52:29.779818    1391 checks.go:373] validating the presence of executable ip
I0304 14:52:29.780044    1391 checks.go:373] validating the presence of executable iptables
I0304 14:52:29.780251    1391 checks.go:373] validating the presence of executable mount
I0304 14:52:29.780465    1391 checks.go:373] validating the presence of executable nsenter
I0304 14:52:29.780674    1391 checks.go:373] validating the presence of executable ebtables
I0304 14:52:29.780925    1391 checks.go:373] validating the presence of executable ethtool
I0304 14:52:29.781018    1391 checks.go:373] validating the presence of executable socat
I0304 14:52:29.781221    1391 checks.go:373] validating the presence of executable tc
I0304 14:52:29.781415    1391 checks.go:373] validating the presence of executable touch
I0304 14:52:29.781647    1391 checks.go:515] running all checks
I0304 14:52:29.838382    1391 checks.go:403] checking whether the given node name is reachable using net.LookupHost
I0304 14:52:29.838876    1391 checks.go:613] validating kubelet version
I0304 14:52:29.983771    1391 checks.go:130] validating if the service is enabled and active
I0304 14:52:30.011507    1391 checks.go:208] validating availability of port 10250
I0304 14:52:30.011951    1391 checks.go:307] validating the existence of file /etc/kubernetes/pki/etcd/ca.crt
I0304 14:52:30.012301    1391 checks.go:307] validating the existence of file /etc/kubernetes/pki/apiserver-etcd-client.crt
I0304 14:52:30.012360    1391 checks.go:307] validating the existence of file /etc/kubernetes/pki/apiserver-etcd-client.key
I0304 14:52:30.012408    1391 checks.go:685] validating the external etcd version
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0304 14:52:30.238175    1391 checks.go:833] image exists: k8s.gcr.io/kube-apiserver:v1.13.4
I0304 14:52:30.378446    1391 checks.go:833] image exists: k8s.gcr.io/kube-controller-manager:v1.13.4
I0304 14:52:30.560185    1391 checks.go:833] image exists: k8s.gcr.io/kube-scheduler:v1.13.4
I0304 14:52:30.745876    1391 checks.go:833] image exists: k8s.gcr.io/kube-proxy:v1.13.4
I0304 14:52:30.930200    1391 checks.go:833] image exists: k8s.gcr.io/pause:3.1
I0304 14:52:31.096902    1391 checks.go:833] image exists: k8s.gcr.io/coredns:1.2.6
I0304 14:52:31.097108    1391 kubelet.go:71] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0304 14:52:31.256217    1391 kubelet.go:89] Starting the kubelet
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0304 14:52:31.530165    1391 certs.go:113] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-c3-m1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.10.93 10.10.10.76 127.0.0.1 10.10.10.90 10.10.10.91 10.10.10.92 10.10.10.76]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Using existing etcd/ca keyless certificate authority
[certs] External etcd mode: Skipping etcd/server certificate authority generation
[certs] External etcd mode: Skipping etcd/peer certificate authority generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate authority generation
[certs] Using existing apiserver-etcd-client certificate and key on disk
I0304 14:52:33.267470    1391 certs.go:113] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I0304 14:52:33.995630    1391 certs.go:72] creating a new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0304 14:52:34.708619    1391 kubeconfig.go:92] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0304 14:52:35.249743    1391 kubeconfig.go:92] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0304 14:52:35.798270    1391 kubeconfig.go:92] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0304 14:52:36.159920    1391 kubeconfig.go:92] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0304 14:52:36.689060    1391 manifests.go:97] [control-plane] getting StaticPodSpecs
I0304 14:52:36.701499    1391 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0304 14:52:36.701545    1391 manifests.go:97] [control-plane] getting StaticPodSpecs
I0304 14:52:36.703214    1391 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0304 14:52:36.703259    1391 manifests.go:97] [control-plane] getting StaticPodSpecs
I0304 14:52:36.704327    1391 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0304 14:52:36.704356    1391 etcd.go:97] [etcd] External etcd mode. Skipping the creation of a manifest for local etcd
I0304 14:52:36.704377    1391 waitcontrolplane.go:89] [wait-control-plane] Waiting for the API server to be healthy
I0304 14:52:36.705892    1391 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0304 14:52:36.707216    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:36.711008    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 3 milliseconds
I0304 14:52:36.711030    1391 round_trippers.go:444] Response Headers:
I0304 14:52:36.711077    1391 request.go:779] Got a Retry-After 1s response for attempt 1 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:37.711365    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:37.715841    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 4 milliseconds
I0304 14:52:37.715880    1391 round_trippers.go:444] Response Headers:
I0304 14:52:37.715930    1391 request.go:779] Got a Retry-After 1s response for attempt 2 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:38.716182    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:38.717826    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 1 milliseconds
I0304 14:52:38.717850    1391 round_trippers.go:444] Response Headers:
I0304 14:52:38.717897    1391 request.go:779] Got a Retry-After 1s response for attempt 3 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:39.718135    1391 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:39.719946    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 1 milliseconds
I0304 14:52:39.719972    1391 round_trippers.go:444] Response Headers:
I0304 14:52:39.720022    1391 request.go:779] Got a Retry-After 1s response for attempt 4 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:40.720273    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:40.722069    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 1 milliseconds
I0304 14:52:40.722093    1391 round_trippers.go:444] Response Headers:
I0304 14:52:40.722136    1391 request.go:779] Got a Retry-After 1s response for attempt 5 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:41.722440    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:41.724033    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 1 milliseconds
I0304 14:52:41.724058    1391 round_trippers.go:444] Response Headers:
I0304 14:52:41.724103    1391 request.go:779] Got a Retry-After 1s response for attempt 6 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:42.724350    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:52.725613    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 10001 milliseconds
I0304 14:52:52.725683    1391 round_trippers.go:444] Response Headers:
I0304 14:52:53.226097    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:53.720051    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 493 milliseconds
I0304 14:52:53.720090    1391 round_trippers.go:444] Response Headers:
I0304 14:52:53.720103    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:53.720115    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:53.720125    1391 round_trippers.go:447]     Content-Length: 879
I0304 14:52:53.720135    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:53 GMT
I0304 14:52:53.720197    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[-]poststarthook/start-kube-apiserver-admission-initializer failed: reason withheld
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[-]autoregister-completion failed: reason withheld
healthz check failed
I0304 14:52:53.726022    1391 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:53.739616    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 13 milliseconds
I0304 14:52:53.739690    1391 round_trippers.go:444] Response Headers:
I0304 14:52:53.739705    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:53.739717    1391 round_trippers.go:447]     Content-Length: 858
I0304 14:52:53.740058    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:53 GMT
I0304 14:52:53.740083    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:53.740342    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[-]poststarthook/start-kube-apiserver-admission-initializer failed: reason withheld
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:54.226068    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:54.232126    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 6 milliseconds
I0304 14:52:54.232149    1391 round_trippers.go:444] Response Headers:
I0304 14:52:54.232161    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:54.232172    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:54.232182    1391 round_trippers.go:447]     Content-Length: 816
I0304 14:52:54.232192    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:54 GMT
I0304 14:52:54.232234    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:54.726154    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:54.734050    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 7 milliseconds
I0304 14:52:54.734091    1391 round_trippers.go:444] Response Headers:
I0304 14:52:54.734111    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:54 GMT
I0304 14:52:54.734129    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:54.734146    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:54.734163    1391 round_trippers.go:447]     Content-Length: 774
I0304 14:52:54.734250    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:55.226158    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:55.231693    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 5 milliseconds
I0304 14:52:55.231734    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.231754    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.231772    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:55.231789    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:55.231805    1391 round_trippers.go:447]     Content-Length: 774
I0304 14:52:55.231998    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:55.726404    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:55.733705    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 200 OK in 7 milliseconds
I0304 14:52:55.733746    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.733766    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:55.733792    1391 round_trippers.go:447]     Content-Length: 2
I0304 14:52:55.733809    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.733888    1391 request.go:942] Response Body: ok
[apiclient] All control plane components are healthy after 19.026898 seconds
I0304 14:52:55.736342    1391 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0304 14:52:55.738400    1391 uploadconfig.go:114] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0304 14:52:55.741686    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config'
I0304 14:52:55.751480    1391 round_trippers.go:438] GET https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 9 milliseconds
I0304 14:52:55.751978    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.752324    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.752367    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.752586    1391 round_trippers.go:447]     Content-Length: 1423
I0304 14:52:55.752696    1391 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"519f6c23-3e69-11e9-8dd7-0050569c544c","resourceVersion":"13121","creationTimestamp":"2019-03-04T10:36:07Z"},"data":{"ClusterConfiguration":"apiServer:n  certSANs:n  - 127.0.0.1n  - 10.10.10.90n  - 10.10.10.91n  - 10.10.10.92n  - 10.10.10.76n  extraArgs:n    authorization-mode: Node,RBACn  timeoutForControlPlane: 4m0snapiVersion: kubeadm.k8s.io/v1beta1ncertificatesDir: /etc/kubernetes/pkinclusterName: kubernetesncontrolPlaneEndpoint: 10.10.10.76:6443ncontrollerManager: {}ndns:n  type: CoreDNSnetcd:n  external:n    caFile: /etc/kubernetes/pki/etcd/ca.crtn    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crtn    endpoints:n    - https://10.10.10.90:2379n    - https://10.10.10.91:2379n    - https://10.10.10.92:2379n    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.keynimageRepository: k8s.gcr.ionkind: ClusterConfigurationnkubernetesVersion: v1.13.4nnetworking:n  dnsDomain: cluster.localn  podSubnet: ""n  serviceSubnet: 10.96.0.0/12nscheduler: {}n","ClusterStatus":"apiEndpoints:n  k8s-c3-m1:n    advertiseAddress: 10.10.10.93n    bindPort: 6443n  k8s-c3-m2:n    advertiseAddress: 10.10.10.94n    bindPort: 6443napiVersion: kubeadm.k8s.io/v1beta1nkind: ClusterStatusn"}}
I0304 14:52:55.756813    1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:n  certSANs:n  - 127.0.0.1n  - 10.10.10.90n  - 10.10.10.91n  - 10.10.10.92n  - 10.10.10.76n  extraArgs:n    authorization-mode: Node,RBACn  timeoutForControlPlane: 4m0snapiVersion: kubeadm.k8s.io/v1beta1ncertificatesDir: /etc/kubernetes/pkinclusterName: kubernetesncontrolPlaneEndpoint: 10.10.10.76:6443ncontrollerManager: {}ndns:n  type: CoreDNSnetcd:n  external:n    caFile: /etc/kubernetes/pki/etcd/ca.crtn    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crtn    endpoints:n    - https://10.10.10.90:2379n    - https://10.10.10.91:2379n    - https://10.10.10.92:2379n    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.keynimageRepository: k8s.gcr.ionkind: ClusterConfigurationnkubernetesVersion: v1.13.4nnetworking:n  dnsDomain: cluster.localn  podSubnet: ""n  serviceSubnet: 10.96.0.0/12nscheduler: {}n","ClusterStatus":"apiEndpoints:n  k8s-c3-m1:n    advertiseAddress: 10.10.10.93n    bindPort: 6443n  k8s-c3-m2:n    advertiseAddress: 10.10.10.94n    bindPort: 6443napiVersion: kubeadm.k8s.io/v1beta1nkind: ClusterStatusn"}}
I0304 14:52:55.757443    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Content-Type: application/json" -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps'
I0304 14:52:55.913083    1391 round_trippers.go:438] POST https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps 409 Conflict in 155 milliseconds
I0304 14:52:55.913243    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.913271    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.913290    1391 round_trippers.go:447]     Content-Length: 218
I0304 14:52:55.913335    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.913438    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps "kubeadm-config" already exists","reason":"AlreadyExists","details":{"name":"kubeadm-config","kind":"configmaps"},"code":409}
I0304 14:52:55.914863    1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:n  certSANs:n  - 127.0.0.1n  - 10.10.10.90n  - 10.10.10.91n  - 10.10.10.92n  - 10.10.10.76n  extraArgs:n    authorization-mode: Node,RBACn  timeoutForControlPlane: 4m0snapiVersion: kubeadm.k8s.io/v1beta1ncertificatesDir: /etc/kubernetes/pkinclusterName: kubernetesncontrolPlaneEndpoint: 10.10.10.76:6443ncontrollerManager: {}ndns:n  type: CoreDNSnetcd:n  external:n    caFile: /etc/kubernetes/pki/etcd/ca.crtn    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crtn    endpoints:n    - https://10.10.10.90:2379n    - https://10.10.10.91:2379n    - https://10.10.10.92:2379n    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.keynimageRepository: k8s.gcr.ionkind: ClusterConfigurationnkubernetesVersion: v1.13.4nnetworking:n  dnsDomain: cluster.localn  podSubnet: ""n  serviceSubnet: 10.96.0.0/12nscheduler: {}n","ClusterStatus":"apiEndpoints:n  k8s-c3-m1:n    advertiseAddress: 10.10.10.93n    bindPort: 6443n  k8s-c3-m2:n    advertiseAddress: 10.10.10.94n    bindPort: 6443napiVersion: kubeadm.k8s.io/v1beta1nkind: ClusterStatusn"}}
I0304 14:52:55.915123    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config'
I0304 14:52:55.923538    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 8 milliseconds
I0304 14:52:55.924120    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.924437    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.924810    1391 round_trippers.go:447]     Content-Length: 1423
I0304 14:52:55.925107    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.925521    1391 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"519f6c23-3e69-11e9-8dd7-0050569c544c","resourceVersion":"13121","creationTimestamp":"2019-03-04T10:36:07Z"},"data":{"ClusterConfiguration":"apiServer:n  certSANs:n  - 127.0.0.1n  - 10.10.10.90n  - 10.10.10.91n  - 10.10.10.92n  - 10.10.10.76n  extraArgs:n    authorization-mode: Node,RBACn  timeoutForControlPlane: 4m0snapiVersion: kubeadm.k8s.io/v1beta1ncertificatesDir: /etc/kubernetes/pkinclusterName: kubernetesncontrolPlaneEndpoint: 10.10.10.76:6443ncontrollerManager: {}ndns:n  type: CoreDNSnetcd:n  external:n    caFile: /etc/kubernetes/pki/etcd/ca.crtn    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crtn    endpoints:n    - https://10.10.10.90:2379n    - https://10.10.10.91:2379n    - https://10.10.10.92:2379n    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.keynimageRepository: k8s.gcr.ionkind: ClusterConfigurationnkubernetesVersion: v1.13.4nnetworking:n  dnsDomain: cluster.localn  podSubnet: ""n  serviceSubnet: 10.96.0.0/12nscheduler: {}n","ClusterStatus":"apiEndpoints:n  k8s-c3-m1:n    advertiseAddress: 10.10.10.93n    bindPort: 6443n  k8s-c3-m2:n    advertiseAddress: 10.10.10.94n    bindPort: 6443napiVersion: kubeadm.k8s.io/v1beta1nkind: ClusterStatusn"}}
I0304 14:52:55.926346    1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
I0304 14:52:55.926823    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles'
I0304 14:52:55.946643    1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 409 Conflict in 19 milliseconds
I0304 14:52:55.947026    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.947441    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.947798    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.948105    1391 round_trippers.go:447]     Content-Length: 298
I0304 14:52:55.948447    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"roles.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:nodes-kubeadm-config","group":"rbac.authorization.k8s.io","kind":"roles"},"code":409}
I0304 14:52:55.949132    1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
I0304 14:52:55.949653    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:nodes-kubeadm-config'
I0304 14:52:55.960370    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:nodes-kubeadm-config 200 OK in 10 milliseconds
I0304 14:52:55.960920    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.961216    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.961507    1391 round_trippers.go:447]     Content-Length: 464
I0304 14:52:55.961789    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.962002    1391 request.go:942] Response Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm%3Anodes-kubeadm-config","uid":"51a356c9-3e69-11e9-8dd7-0050569c544c","resourceVersion":"559","creationTimestamp":"2019-03-04T10:36:07Z"},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
I0304 14:52:55.964418    1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}
I0304 14:52:55.965022    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings'
I0304 14:52:55.983782    1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 409 Conflict in 18 milliseconds
I0304 14:52:55.983847    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.983890    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.983920    1391 round_trippers.go:447]     Content-Length: 312
I0304 14:52:55.983948    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.984007    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rolebindings.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:nodes-kubeadm-config","group":"rbac.authorization.k8s.io","kind":"rolebindings"},"code":409}
I0304 14:52:55.984330    1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}
I0304 14:52:55.984464    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:nodes-kubeadm-config'
I0304 14:52:55.994138    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:nodes-kubeadm-config 200 OK in 9 milliseconds
I0304 14:52:55.994193    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.994497    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.994878    1391 round_trippers.go:447]     Content-Length: 678
I0304 14:52:55.995094    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.995377    1391 request.go:942] Response Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm%3Anodes-kubeadm-config","uid":"51a61bf8-3e69-11e9-8dd7-0050569c544c","resourceVersion":"560","creationTimestamp":"2019-03-04T10:36:07Z"},"subjects":[{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}
I0304 14:52:56.001421    1391 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0304 14:52:56.002891    1391 uploadconfig.go:128] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
I0304 14:52:56.005261    1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"data":{"kubelet":"address: 0.0.0.0napiVersion: kubelet.config.k8s.io/v1beta1nauthentication:n  anonymous:n    enabled: falsen  webhook:n    cacheTTL: 2m0sn    enabled: truen  x509:n    clientCAFile: /etc/kubernetes/pki/ca.crtnauthorization:n  mode: Webhookn  webhook:n    cacheAuthorizedTTL: 5m0sn    cacheUnauthorizedTTL: 30sncgroupDriver: cgroupfsncgroupsPerQOS: truenclusterDNS:n- 10.96.0.10nclusterDomain: cluster.localnconfigMapAndSecretChangeDetectionStrategy: WatchncontainerLogMaxFiles: 5ncontainerLogMaxSize: 10MincontentType: application/vnd.kubernetes.protobufncpuCFSQuota: truencpuCFSQuotaPeriod: 100msncpuManagerPolicy: nonencpuManagerReconcilePeriod: 10snenableControllerAttachDetach: truenenableDebuggingHandlers: truenenforceNodeAllocatable:n- podsneventBurst: 10neventRecordQPS: 5nevictionHard:n  imagefs.available: 15%n  memory.available: 100Min  nodefs.available: 10%n  nodefs.inodesFree: 5%nevictionPressureTransitionPeriod: 5m0snfailSwapOn: truenfileCheckFrequency: 20snhairpinMode: promiscuous-bridgenhealthzBindAddress: 127.0.0.1nhealthzPort: 10248nhttpCheckFrequency: 20snimageGCHighThresholdPercent: 85nimageGCLowThresholdPercent: 80nimageMinimumGCAge: 2m0sniptablesDropBit: 15niptablesMasqueradeBit: 14nkind: KubeletConfigurationnkubeAPIBurst: 10nkubeAPIQPS: 5nmakeIPTablesUtilChains: truenmaxOpenFiles: 1000000nmaxPods: 110nnodeLeaseDurationSeconds: 40nnodeStatusReportFrequency: 1m0snnodeStatusUpdateFrequency: 10snoomScoreAdj: -999npodPidsLimit: -1nport: 10250nregistryBurst: 10nregistryPullQPS: 5nresolvConf: /etc/resolv.confnrotateCertificates: truenruntimeRequestTimeout: 2m0snserializeImagePulls: truenstaticPodPath: /etc/kubernetes/manifestsnstreamingConnectionIdleTimeout: 4h0m0snsyncFrequency: 1m0snvolumeStatsAggPeriod: 1m0sn"}}
I0304 14:52:56.005580    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps'
I0304 14:52:56.026664    1391 round_trippers.go:438] POST https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps 409 Conflict in 20 milliseconds
I0304 14:52:56.026763    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.026798    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.026852    1391 round_trippers.go:447]     Content-Length: 228
I0304 14:52:56.026931    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.027084    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps "kubelet-config-1.13" already exists","reason":"AlreadyExists","details":{"name":"kubelet-config-1.13","kind":"configmaps"},"code":409}
I0304 14:52:56.027551    1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"data":{"kubelet":"address: 0.0.0.0napiVersion: kubelet.config.k8s.io/v1beta1nauthentication:n  anonymous:n    enabled: falsen  webhook:n    cacheTTL: 2m0sn    enabled: truen  x509:n    clientCAFile: /etc/kubernetes/pki/ca.crtnauthorization:n  mode: Webhookn  webhook:n    cacheAuthorizedTTL: 5m0sn    cacheUnauthorizedTTL: 30sncgroupDriver: cgroupfsncgroupsPerQOS: truenclusterDNS:n- 10.96.0.10nclusterDomain: cluster.localnconfigMapAndSecretChangeDetectionStrategy: WatchncontainerLogMaxFiles: 5ncontainerLogMaxSize: 10MincontentType: application/vnd.kubernetes.protobufncpuCFSQuota: truencpuCFSQuotaPeriod: 100msncpuManagerPolicy: nonencpuManagerReconcilePeriod: 10snenableControllerAttachDetach: truenenableDebuggingHandlers: truenenforceNodeAllocatable:n- podsneventBurst: 10neventRecordQPS: 5nevictionHard:n  imagefs.available: 15%n  memory.available: 100Min  nodefs.available: 10%n  nodefs.inodesFree: 5%nevictionPressureTransitionPeriod: 5m0snfailSwapOn: truenfileCheckFrequency: 20snhairpinMode: promiscuous-bridgenhealthzBindAddress: 127.0.0.1nhealthzPort: 10248nhttpCheckFrequency: 20snimageGCHighThresholdPercent: 85nimageGCLowThresholdPercent: 80nimageMinimumGCAge: 2m0sniptablesDropBit: 15niptablesMasqueradeBit: 14nkind: KubeletConfigurationnkubeAPIBurst: 10nkubeAPIQPS: 5nmakeIPTablesUtilChains: truenmaxOpenFiles: 1000000nmaxPods: 110nnodeLeaseDurationSeconds: 40nnodeStatusReportFrequency: 1m0snnodeStatusUpdateFrequency: 10snoomScoreAdj: -999npodPidsLimit: -1nport: 10250nregistryBurst: 10nregistryPullQPS: 5nresolvConf: /etc/resolv.confnrotateCertificates: truenruntimeRequestTimeout: 2m0snserializeImagePulls: truenstaticPodPath: /etc/kubernetes/manifestsnstreamingConnectionIdleTimeout: 4h0m0snsyncFrequency: 1m0snvolumeStatsAggPeriod: 1m0sn"}}
I0304 14:52:56.027830    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13'
I0304 14:52:56.036853    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 8 milliseconds
I0304 14:52:56.036900    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.037253    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.037291    1391 round_trippers.go:447]     Content-Length: 2133
I0304 14:52:56.037554    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.037755    1391 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13","uid":"51a9de57-3e69-11e9-8dd7-0050569c544c","resourceVersion":"561","creationTimestamp":"2019-03-04T10:36:07Z"},"data":{"kubelet":"address: 0.0.0.0napiVersion: kubelet.config.k8s.io/v1beta1nauthentication:n  anonymous:n    enabled: falsen  webhook:n    cacheTTL: 2m0sn    enabled: truen  x509:n    clientCAFile: /etc/kubernetes/pki/ca.crtnauthorization:n  mode: Webhookn  webhook:n    cacheAuthorizedTTL: 5m0sn    cacheUnauthorizedTTL: 30sncgroupDriver: cgroupfsncgroupsPerQOS: truenclusterDNS:n- 10.96.0.10nclusterDomain: cluster.localnconfigMapAndSecretChangeDetectionStrategy: WatchncontainerLogMaxFiles: 5ncontainerLogMaxSize: 10MincontentType: application/vnd.kubernetes.protobufncpuCFSQuota: truencpuCFSQuotaPeriod: 100msncpuManagerPolicy: nonencpuManagerReconcilePeriod: 10snenableControllerAttachDetach: truenenableDebuggingHandlers: truenenforceNodeAllocatable:n- podsneventBurst: 10neventRecordQPS: 5nevictionHard:n  imagefs.available: 15%n  memory.available: 100Min  nodefs.available: 10%n  nodefs.inodesFree: 5%nevictionPressureTransitionPeriod: 5m0snfailSwapOn: truenfileCheckFrequency: 20snhairpinMode: promiscuous-bridgenhealthzBindAddress: 127.0.0.1nhealthzPort: 10248nhttpCheckFrequency: 20snimageGCHighThresholdPercent: 85nimageGCLowThresholdPercent: 80nimageMinimumGCAge: 2m0sniptablesDropBit: 15niptablesMasqueradeBit: 14nkind: KubeletConfigurationnkubeAPIBurst: 10nkubeAPIQPS: 5nmakeIPTablesUtilChains: truenmaxOpenFiles: 1000000nmaxPods: 110nnodeLeaseDurationSeconds: 40nnodeStatusReportFrequency: 1m0snnodeStatusUpdateFrequency: 10snoomScoreAdj: -999npodPidsLimit: -1nport: 10250nregistryBurst: 10nregistryPullQPS: 5nresolvConf: /etc/resolv.confnrotateCertificates: truenruntimeRequestTimeout: 2m0snserializeImagePulls: truenstaticPodPath: /etc/kubernetes/manifestsnstreamingConnectionIdleTimeout: 4h0m0snsyncFrequency: 1m0snvolumeStatsAggPeriod: 1m0sn"}}
I0304 14:52:56.038255    1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.13"]}]}
I0304 14:52:56.038523    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles'
I0304 14:52:56.052414    1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 409 Conflict in 13 milliseconds
I0304 14:52:56.052512    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.052572    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.052603    1391 round_trippers.go:447]     Content-Length: 296
I0304 14:52:56.052685    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.052955    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"roles.rbac.authorization.k8s.io "kubeadm:kubelet-config-1.13" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:kubelet-config-1.13","group":"rbac.authorization.k8s.io","kind":"roles"},"code":409}
I0304 14:52:56.053398    1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.13"]}]}
I0304 14:52:56.053646    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:kubelet-config-1.13'
I0304 14:52:56.061599    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:kubelet-config-1.13 200 OK in 7 milliseconds
I0304 14:52:56.061691    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.061723    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.061779    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.061808    1391 round_trippers.go:447]     Content-Length: 467
I0304 14:52:56.061917    1391 request.go:942] Response Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm%3Akubelet-config-1.13","uid":"51abee39-3e69-11e9-8dd7-0050569c544c","resourceVersion":"562","creationTimestamp":"2019-03-04T10:36:07Z"},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.13"]}]}
I0304 14:52:56.062370    1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:nodes"},{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.13"}}
I0304 14:52:56.062564    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings'
I0304 14:52:56.076620    1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 409 Conflict in 13 milliseconds
I0304 14:52:56.076664    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.076902    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.076938    1391 round_trippers.go:447]     Content-Length: 310
I0304 14:52:56.077092    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.077299    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rolebindings.rbac.authorization.k8s.io "kubeadm:kubelet-config-1.13" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:kubelet-config-1.13","group":"rbac.authorization.k8s.io","kind":"rolebindings"},"code":409}
I0304 14:52:56.077657    1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:nodes"},{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.13"}}
I0304 14:52:56.077940    1391 round_trippers.go:419] curl -k -v -XPUT  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" -H "Content-Type: application/json" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:kubelet-config-1.13'
I0304 14:52:56.084893    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:kubelet-config-1.13 200 OK in 6 milliseconds
I0304 14:52:56.084937    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.085395    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.085635    1391 round_trippers.go:447]     Content-Length: 675
I0304 14:52:56.085675    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.086357    1391 request.go:942] Response Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm%3Akubelet-config-1.13","uid":"51ad932c-3e69-11e9-8dd7-0050569c544c","resourceVersion":"563","creationTimestamp":"2019-03-04T10:36:07Z"},"subjects":[{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:nodes"},{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.13"}}
I0304 14:52:56.086694    1391 uploadconfig.go:133] [upload-config] Preserving the CRISocket information for the control-plane node
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-c3-m1" as an annotation
[excluded similar lines] ...
I0304 14:53:16.587525    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1'
I0304 14:53:16.597510    1391 round_trippers.go:438] GET https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1 404 Not Found in 9 milliseconds
I0304 14:53:16.597872    1391 round_trippers.go:444] Response Headers:
I0304 14:53:16.597909    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:53:16.598117    1391 round_trippers.go:447]     Content-Length: 188
I0304 14:53:16.598141    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:53:16 GMT
I0304 14:53:16.598332    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes "k8s-c3-m1" not found","reason":"NotFound","details":{"name":"k8s-c3-m1","kind":"nodes"},"code":404}
[kubelet-check] Initial timeout of 40s passed.
[excluded similar lines] ...
I0304 14:53:17.111508    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes "k8s-c3-m1" not found","reason":"NotFound","details":{"name":"k8s-c3-m1","kind":"nodes"},"code":404}
I0304 14:54:56.095649    1391 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1'
I0304 14:54:56.101815    1391 round_trippers.go:438] GET https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1 404 Not Found in 6 milliseconds
I0304 14:54:56.101895    1391 round_trippers.go:444] Response Headers:
I0304 14:54:56.101926    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:54:56.101945    1391 round_trippers.go:447]     Content-Length: 188
I0304 14:54:56.101996    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:54:56 GMT
I0304 14:54:56.102074    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes "k8s-c3-m1" not found","reason":"NotFound","details":{"name":"k8s-c3-m1","kind":"nodes"},"code":404}
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
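
A note on the failure mode: the repeated 404s above mean the API server is reachable through the VIP, but a Node object for k8s-c3-m1 was never created, i.e. the kubelet never registered itself. A quick way to confirm this while init is stuck (a sketch; admin.conf is written by an earlier init phase):

kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
journalctl -u kubelet --no-pager | tail -n 30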

What you expected to happen?

Master to initialize without issue.

How to reproduce it (as minimally and precisely as possible)?

node preparation

# note! - docker needs to be installed on all nodes (it is on my 16.04 template VMs)

# install misc tools
apt-get update && apt-get install -y apt-transport-https curl

# install required k8s tools
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

# turn off swap
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# create systemd config for kubelet
cat << _EOF_ > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
Restart=always
_EOF_

# reload systemd and start the kubelet
systemctl daemon-reload
systemctl restart kubelet
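
Note: per the HA guide, the 20-etcd-service-manager.conf override is meant for the etcd-only hosts; if it is also present on the master nodes, the kubelet there runs without a --kubeconfig and can never register a Node object. A sketch of undoing it on control-plane nodes before kubeadm init (assuming the file ended up there by mistake):

rm -f /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
systemctl daemon-reload
systemctl restart kubelet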

config creation for kubelet and etcd

### on all etcd nodes
# create required variables
declare -A ETCDINFO
ETCDINFO=([k8s-c3-e1]=10.10.10.90 [k8s-c3-e2]=10.10.10.91 [k8s-c3-e3]=10.10.10.92)
mapfile -t ETCDNAMES < <(for KEY in ${!ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t ETCDIPS < <(for KEY in ${ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
declare -A MASTERINFO
MASTERINFO=([k8s-c3-m1]=10.10.10.93 [k8s-c3-m2]=10.10.10.94 [k8s-c3-m3]=10.10.10.95)
mapfile -t MASTERNAMES < <(for KEY in ${!MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t MASTERIPS < <(for KEY in ${MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
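
To sanity-check the derived arrays before templating any config, a minimal sketch using the same variable names:

declare -p ETCDNAMES ETCDIPS MASTERNAMES MASTERIPS
# expected: names and IPs in the same (IP-sorted) order, e.g.
# declare -a ETCDNAMES=([0]="k8s-c3-e1" [1]="k8s-c3-e2" [2]="k8s-c3-e3")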

# create ClusterConfiguration for etcd
cat << EOF > /root/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${ETCDINFO[$HOSTNAME]}"
        peerCertSANs:
        - "${ETCDINFO[$HOSTNAME]}"
        extraArgs:
            initial-cluster: ${ETCDNAMES[0]}=https://${ETCDIPS[0]}:2380,${ETCDNAMES[1]}=https://${ETCDIPS[1]}:2380,${ETCDNAMES[2]}=https://${ETCDIPS[2]}:2380
            initial-cluster-state: new
            name: ${HOSTNAME}
            listen-peer-urls: https://${ETCDINFO[$HOSTNAME]}:2380
            listen-client-urls: https://${ETCDINFO[$HOSTNAME]}:2379
            advertise-client-urls: https://${ETCDINFO[$HOSTNAME]}:2379
            initial-advertise-peer-urls: https://${ETCDINFO[$HOSTNAME]}:2380
EOF

generate and distribute certs

### run only on one etcd node (k8s-c3-e1)
# generate the main certificate authority (creates two files in /etc/kubernetes/pki/etcd/)
kubeadm init phase certs etcd-ca

# create certificates
kubeadm init phase certs etcd-server --config=/root/kubeadmcfg.yaml 
kubeadm init phase certs etcd-peer --config=/root/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/root/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/root/kubeadmcfg.yaml

# copy cert files from k8s-c3-e1 to the other etcd nodes
scp -rp /etc/kubernetes/pki ubuntu@${ETCDIPS[1]}: && \
ssh -t ubuntu@${ETCDIPS[1]} "sudo mv pki /etc/kubernetes/ && \
sudo chown -R root.root /etc/kubernetes/pki"

scp -rp /etc/kubernetes/pki ubuntu@${ETCDIPS[2]}: && \
ssh -t ubuntu@${ETCDIPS[2]} "sudo mv pki /etc/kubernetes/ && \
sudo chown -R root.root /etc/kubernetes/pki"

# copy cert files from k8s-c3-e1 to the master nodes
scp -rp /etc/kubernetes/pki ubuntu@${MASTERIPS[0]}: && \
ssh -t ubuntu@${MASTERIPS[0]} "sudo mv pki /etc/kubernetes/ && \
sudo find /etc/kubernetes/pki -not -name ca.crt \
-not -name apiserver-etcd-client.crt \
-not -name apiserver-etcd-client.key \
-type f -delete && \
sudo chown -R root.root /etc/kubernetes/pki"

scp -rp /etc/kubernetes/pki ubuntu@${MASTERIPS[1]}: && \
ssh -t ubuntu@${MASTERIPS[1]} "sudo mv pki /etc/kubernetes/ && \
sudo find /etc/kubernetes/pki -not -name ca.crt \
-not -name apiserver-etcd-client.crt \
-not -name apiserver-etcd-client.key \
-type f -delete && \
sudo chown -R root.root /etc/kubernetes/pki"

scp -rp /etc/kubernetes/pki ubuntu@${MASTERIPS[2]}: && \
ssh -t ubuntu@${MASTERIPS[2]} "sudo mv pki /etc/kubernetes/ && \
sudo find /etc/kubernetes/pki -not -name ca.crt \
-not -name apiserver-etcd-client.crt \
-not -name apiserver-etcd-client.key \
-type f -delete && \
sudo chown -R root.root /etc/kubernetes/pki"
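
The three per-master blocks are identical apart from the target IP; a loop sketch with the same commands and the same assumption that the ubuntu user has sudo:

for IP in "${MASTERIPS[@]}"; do
  scp -rp /etc/kubernetes/pki ubuntu@${IP}: && \
  ssh -t ubuntu@${IP} "sudo mv pki /etc/kubernetes/ && \
  sudo find /etc/kubernetes/pki -not -name ca.crt \
  -not -name apiserver-etcd-client.crt \
  -not -name apiserver-etcd-client.key \
  -type f -delete && \
  sudo chown -R root.root /etc/kubernetes/pki"
done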

### run on the other etcd nodes
# create certificates
kubeadm init phase certs etcd-server --config=/root/kubeadmcfg.yaml 
kubeadm init phase certs etcd-peer --config=/root/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/root/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/root/kubeadmcfg.yaml

# create the etcd static pod manifest
kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml


### run only on one etcd node (k8s-c3-e1)
# check if cluster is running
docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.24 etcdctl \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${ETCDIPS[0]}:2379 cluster-health
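
The flags above use the v2 etcdctl API that the v3.2 image defaults to; the same health check via the v3 API (optional, just an alternative form) would be:

docker run --rm -it --net host -e ETCDCTL_API=3 \
-v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.24 etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${ETCDIPS[0]}:2379 endpoint health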

config and init master nodes

### run on all master nodes
# create required variables
declare -A ETCDINFO
ETCDINFO=([k8s-c3-e1]=10.10.10.90 [k8s-c3-e2]=10.10.10.91 [k8s-c3-e3]=10.10.10.92)
mapfile -t ETCDNAMES < <(for KEY in ${!ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t ETCDIPS < <(for KEY in ${ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
declare -A MASTERINFO
MASTERINFO=([k8s-c3-m1]=10.10.10.93 [k8s-c3-m2]=10.10.10.94 [k8s-c3-m3]=10.10.10.95)
mapfile -t MASTERNAMES < <(for KEY in ${!MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t MASTERIPS < <(for KEY in ${MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
VIP=10.10.10.76

# create ClusterConfiguration for the master nodes
cat << EOF > /root/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "127.0.0.1"
  - "${ETCDIPS[0]}"
  - "${ETCDIPS[1]}"
  - "${ETCDIPS[2]}"
  - "${VIP}"
controlPlaneEndpoint: "${VIP}:6443"
etcd:
    external:
        endpoints:
        - https://${ETCDIPS[0]}:2379
        - https://${ETCDIPS[1]}:2379
        - https://${ETCDIPS[2]}:2379
        caFile: /etc/kubernetes/pki/etcd/ca.crt
        certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
        keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF
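
Before running init, it is worth confirming that exactly the three files this config references survived the pruning step above (a quick sanity sketch):

ls -l /etc/kubernetes/pki/etcd/ca.crt \
/etc/kubernetes/pki/apiserver-etcd-client.crt \
/etc/kubernetes/pki/apiserver-etcd-client.key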

### run only on the first master node (k8s-c3-m1)
# init the first master node
service kubelet stop && 
kubeadm init --config /root/kubeadmcfg.yaml
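
While init runs, a second shell on the same node can watch whether the Node object ever appears (a sketch; admin.conf exists once the kubeconfig phase has finished):

kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes -w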

Anything else we need to know?

cluster nodes

k8s-c3-lb — 10.10.10.76
k8s-c3-e1 — 10.10.10.90
k8s-c3-e2 — 10.10.10.91
k8s-c3-e3 — 10.10.10.92
k8s-c3-m1 — 10.10.10.93
k8s-c3-m2 — 10.10.10.94
k8s-c3-m3 — 10.10.10.95
k8s-c3-w1 — 10.10.10.96
k8s-c3-w2 — 10.10.10.97
k8s-c3-w3 — 10.10.10.98

nginx LB config

root@k8s-c3-lb:~# cat nginx.conf
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

error_log /var/log/nginx/error.log info;

stream {
  upstream k8s-c3 {
    server 10.10.10.93:6443;
    server 10.10.10.94:6443;
    server 10.10.10.95:6443;
  }
  server {
    listen 6443;
    proxy_pass k8s-c3;
  }
}

netcat output to check the LB

root@k8s-c3-lb:~# nc -v 10.10.10.76 6443
Connection to 10.10.10.76 6443 port [tcp/*] succeeded!
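
The same reachability check can be looped over every backend behind the VIP (a sketch with the master IPs spelled out):

for ip in 10.10.10.93 10.10.10.94 10.10.10.95; do nc -zv $ip 6443; done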

kubeadm config on etcd nodes

root@k8s-c3-e1:~# cat kubeadmcfg.yaml 
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "10.10.10.90"
        peerCertSANs:
        - "10.10.10.90"
        extraArgs:
            initial-cluster: k8s-c3-e1=https://10.10.10.90:2380,k8s-c3-e2=https://10.10.10.91:2380,k8s-c3-e3=https://10.10.10.92:2380
            initial-cluster-state: new
            name: k8s-c3-e1
            listen-peer-urls: https://10.10.10.90:2380
            listen-client-urls: https://10.10.10.90:2379
            advertise-client-urls: https://10.10.10.90:2379
            initial-advertise-peer-urls: https://10.10.10.90:2380

etcd check from master

root@k8s-c3-m1:~# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.24 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.10.10.90:2379 cluster-health
member 2855b88ffd64a219 is healthy: got healthy result from https://10.10.10.91:2379
member 54861c1657ba1b20 is healthy: got healthy result from https://10.10.10.92:2379
member 6fc6fbb1e152a287 is healthy: got healthy result from https://10.10.10.90:2379
cluster is healthy

kubeadm config on master

root@k8s-c3-m1:~# cat /root/kubeadmcfg.yaml 
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "127.0.0.1"
  - "10.10.10.90"
  - "10.10.10.91"
  - "10.10.10.92"
  - "10.10.10.76"
controlPlaneEndpoint: "10.10.10.76:6443"
etcd:
    external:
        endpoints:
        - https://10.10.10.90:2379
        - https://10.10.10.91:2379
        - https://10.10.10.92:2379
        caFile: /etc/kubernetes/pki/etcd/ca.crt
        certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
        keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key

docker ps

root@k8s-c3-m1:~# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
d6e9af6f2585        dd862b749309           "kube-scheduler --ad…"   28 minutes ago      Up 28 minutes                           k8s_kube-scheduler_kube-scheduler-k8s-c3-m1_kube-system_4b52d75cab61380f07c0c5a69fb371d4_1
76bcca06bb0c        40a817357014           "kube-controller-man…"   28 minutes ago      Up 28 minutes                           k8s_kube-controller-manager_kube-controller-manager-k8s-c3-m1_kube-system_3a2670bb8847c2036740fe0f0a3de429_1
74c9b34ec00d        fc3801f0fc54           "kube-apiserver --au…"   About an hour ago   Up About an hour                        k8s_kube-apiserver_kube-apiserver-k8s-c3-m1_kube-system_6fb1fd1d468dedcf6a62eff4d392685e_0
e68bbbc0967e        k8s.gcr.io/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-scheduler-k8s-c3-m1_kube-system_4b52d75cab61380f07c0c5a69fb371d4_0
0d6e0d0040cf        k8s.gcr.io/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-controller-manager-k8s-c3-m1_kube-system_3a2670bb8847c2036740fe0f0a3de429_0
29f7974ae280        k8s.gcr.io/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-apiserver-k8s-c3-m1_kube-system_6fb1fd1d468dedcf6a62eff4d392685e_0

journalctl -xeu kubelet

root@k8s-c3-m1:~# journalctl -xeu kubelet
Mar 04 15:44:05 k8s-c3-m1 kubelet[1512]: I0304 15:44:05.502619    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:15 k8s-c3-m1 kubelet[1512]: I0304 15:44:15.582590    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:24 k8s-c3-m1 kubelet[1512]: I0304 15:44:24.567123    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:25 k8s-c3-m1 kubelet[1512]: I0304 15:44:25.622999    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:35 k8s-c3-m1 kubelet[1512]: I0304 15:44:35.669595    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:45 k8s-c3-m1 kubelet[1512]: I0304 15:44:45.742763    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:49 k8s-c3-m1 kubelet[1512]: I0304 15:44:49.566491    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:55 k8s-c3-m1 kubelet[1512]: I0304 15:44:55.812636    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:58 k8s-c3-m1 kubelet[1512]: I0304 15:44:58.566265    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:05 k8s-c3-m1 kubelet[1512]: I0304 15:45:05.890388    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:15 k8s-c3-m1 kubelet[1512]: I0304 15:45:15.971426    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:26 k8s-c3-m1 kubelet[1512]: I0304 15:45:26.043344    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:36 k8s-c3-m1 kubelet[1512]: I0304 15:45:36.117636    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:36 k8s-c3-m1 kubelet[1512]: I0304 15:45:36.566338    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:46 k8s-c3-m1 kubelet[1512]: I0304 15:45:46.190995    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:51 k8s-c3-m1 kubelet[1512]: I0304 15:45:51.566093    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:56 k8s-c3-m1 kubelet[1512]: I0304 15:45:56.273010    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:06 k8s-c3-m1 kubelet[1512]: I0304 15:46:06.346175    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:16 k8s-c3-m1 kubelet[1512]: I0304 15:46:16.384087    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach

systemctl

root@k8s-c3-m1:~# systemctl status kubelet          
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf, 20-etcd-service-manager.conf
   Active: active (running) since Mon 2019-03-04 14:52:31 GMT; 55min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 1512 (kubelet)
    Tasks: 17
   Memory: 42.1M
      CPU: 2min 45.226s
   CGroup: /system.slice/kubelet.service
           └─1512 /usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true

Mar 04 15:46:26 k8s-c3-m1 kubelet[1512]: I0304 15:46:26.450692    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:27 k8s-c3-m1 kubelet[1512]: I0304 15:46:27.566498    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:36 k8s-c3-m1 kubelet[1512]: I0304 15:46:36.519582    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:46 k8s-c3-m1 kubelet[1512]: I0304 15:46:46.621611    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:56 k8s-c3-m1 kubelet[1512]: I0304 15:46:56.566111    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:56 k8s-c3-m1 kubelet[1512]: I0304 15:46:56.568601    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:56 k8s-c3-m1 kubelet[1512]: I0304 15:46:56.706182    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:47:06 k8s-c3-m1 kubelet[1512]: I0304 15:47:06.778864    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:47:16 k8s-c3-m1 kubelet[1512]: I0304 15:47:16.852441    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:47:26 k8s-c3-m1 kubelet[1512]: I0304 15:47:26.893380    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
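
Worth noting in the status output: both 10-kubeadm.conf and 20-etcd-service-manager.conf drop-ins are loaded, and the effective command line is the minimal etcd-node one with no --kubeconfig, which matches a kubelet that never registers. To see which ExecStart wins, a sketch:

systemctl cat kubelet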

I am trying to create a template for a Kubernetes cluster having 1 master and 2 worker nodes. I have installed all the prerequisite software and have run kubeadm init on my master node. But when I try to run the kubeadm join command that I get as output of the init command, I am getting an error.

[discovery] Created cluster-info discovery client, requesting info from "https://10.31.2.33:6443"
[discovery] Requesting info from "https://10.31.2.33:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.31.2.33:6443"
[discovery] Successfully established connection with API Server "10.31.2.33:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "<workernode2>" as an annotation
error uploading crisocket: timed out waiting for the condition

I have done a swapoff -a before running this on workernode2.

I was able to run the join once, but after that, as part of a script, I ran kubeadm reset followed by init and join a few times, which is when this started showing up.

I am not able to figure out what I am doing wrong, or where.

My main intent is to put all the commands in the form of a shell script (on master node) so that it can be run on a cluster to create a network.


I encountered the following issue after the node was rebooted:

[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8smaster" as an annotation
[kubelet-check] Initial timeout of 40s passed.
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition

Steps to get rid of this issue:

  1. Check the hostname again; after a reboot it might have changed (see the verification sketch after these steps).

    sudo vi /etc/hostname 
    sudo vi /etc/hosts
    
  2. Perform the following clean-up actions

    Code:

    sudo kubeadm reset
    sudo rm -rf /var/lib/cni/
    
    systemctl daemon-reload
    
    systemctl restart kubelet
    
    sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
    
  3. Execute the init action with the special tag as below

    Code:

    sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=10.10.10.2 --ignore-preflight-errors=all    
    

    (where 10.10.10.2 is the IP of master node and 192.168.0.0/16 is the private subnet assigned for Pods)
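
A quick sketch for step 1 above, comparing what the kubelet will register against what the cluster expects (the names are whatever your node actually uses):

hostnamectl --static            # the name the kubelet registers (unless overridden)
kubectl get nodes -o name       # the names the cluster already knows
grep "$(hostname)" /etc/hosts   # confirm the name still resolves locally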


I’ve had the same problem on Ubuntu 16.04 amd64 and fixed it with these commands:

swapoff -a    # will turn off the swap 
kubeadm reset
systemctl daemon-reload
systemctl restart kubelet
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X  # will reset iptables

Also, look at the related kubeadm swap issue on the kubeadm GitHub tracker, where people still report having the problem after turning swap off.

You may also try adding the --fail-swap-on=false flag in the /etc/default/kubelet file, though it didn’t help in my case.
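
For reference, on Debian/Ubuntu packages that file is sourced by the kubeadm drop-in, so the flag goes in via KUBELET_EXTRA_ARGS (a sketch; the flag itself is the one mentioned above):

echo 'KUBELET_EXTRA_ARGS=--fail-swap-on=false' | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet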

It seems to be fixed in the latest Kubernetes version; after upgrading the cluster, I haven’t experienced the issue.


I would recommend caution in altering iptables. After an enormous amount of troubleshooting on a 2-node Ubuntu 20.04 kubeadm cluster, this turned out to come down to the internal node-name mapping that the CNI plugin depends on.

With sudo hostnamectl set-hostname c23996dd402c.mylabserver.com I changed the master node back to its mapped name and then did a sudo systemctl daemon-reload. I logged out and back in for good measure to ensure the reload worked. I could see the command prompt change back to the default VM node name.

The mapping now matches for hostname to private IP for the cluster:

~$ cat /etc/hosts
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
# Cloud Server Hostname mapping
172.31.xx.xxx   xxxxxxxx.mylabserver.com
woopsie@master:~$ cat /etc/hostname
xxxxxxxx.mylabserver.com


The worker1 node has the same problem, so reverting to the system default was the easier way for me. I just changed it with sudo hostnamectl set-hostname masterdefaultnodename. IMHO, iptables alterations should always be done after careful consideration, as an escalated measure to fix the issue.

Master done, onto worker1…

kubectl get nodes
NAME                           STATUS     ROLES           AGE   VERSION
masterdefaultnodename.mylabserver.com   Ready      control-plane   33d   v1.25.0
worker1defautnodname.mylabserver.com   NotReady   <none>          33d   v1.25.0
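
The same revert on the worker is just the hostname change plus a reload (a sketch; <original-node-name> is a placeholder for whatever name the cluster has mapped for worker1):

sudo hostnamectl set-hostname <original-node-name>
sudo systemctl daemon-reload
sudo systemctl restart kubelet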


Question:

When joining a node to the cluster with: sudo kubeadm join 172.16.7.101:6443 --token 4mya3g.duoa5xxuxin0l6j3 --discovery-token-ca-cert-hash sha256:bba76ac7a207923e8cae0c466dac166500a8e0db43fb15ad9018b615bdbabeb2

Results:

 [preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
 

And systemctl status kubelet:

 node@node:~$ sudo systemctl status kubelet
 kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2019-04-17 06:20:56 UTC; 12min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 26716 (kubelet)
    Tasks: 16 (limit: 1111)
   CGroup: /system.slice/kubelet.service
           └─26716 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml -

Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.022384   26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.073969   26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.122820   26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.228838   26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.273153   26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.330578   26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.431114   26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.473501   26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.531294   26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.632347   26716 kubelet.go:2244] node "node" not found
 

As for the Unauthorized errors: I ran kubeadm token list on the master, and the token is valid.
So what is the problem? Many thanks.

Comments:

1. Just to confirm: the node was not added to the cluster, correct? Can you try reloading the kubelet configuration with sudo kubeadm upgrade node config --kubeconfig /etc/kubernetes/kubelet.conf --kubelet-version v1.13.4 -v8, substituting your own kubelet version?

Answer #1:

Please check the pre- and post-installation steps here.
Please also check that your services are enabled and running, and check the Docker environment:

sudo systemctl enable docker
sudo systemctl enable kubelet
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet

Are the results the same if you run the init command with --ignore-preflight-errors=all?
For more details, please also use "journalctl -u kubelet".
For more details from your logs, please see "github - kubeadm / issues" here:
Please provide more details about your environment so this problem can be reproduced, and share your additional findings.

Could you please run one more test and perform a kubeadm init on your worker node the same way as on the first node (in short, please create a second master node), just to verify your working environment?
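
If the Unauthorized errors persist even with a valid token, regenerating a fresh join command on the master rules out a stale token or discovery hash (a sketch):

kubeadm token create --print-join-command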

Comments:

1. journalctl -u kubelet output: Apr 03 09:43:36 node kubelet[2947]: F0403 09:43:36.835732 2947 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file; I checked, and the file /var/lib/kubelet/config.yaml exists.

2. Updated: please run the test described above.



I am using the following kubeadm config with an external etcd setup for HA Kubernetes, following https://kubernetes.io/docs/setup/independent/high-availability/#external-etcd-nodes, on bare-metal servers with CentOS 7.

etcd version: v3.2.26

kind: ClusterConfiguration
kubernetesVersion: v1.13.1
apiServer:
  certSANs:
  - "k8-master01.loc.prov.domain.tld"
controlPlaneEndpoint: "k8-master01.loc.prov.domain.tld:8080"
etcd:
    external:
        endpoints:
        - https://k8-master01.loc.prov.domain.tld:2379
        - https://k8-master02.loc.prov.domain.tld:2379
        - https://k8-master03.loc.prov.domain.tld:2379
        caFile: /etc/kubernetes/pki/etcd/ca.crt
        certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
        keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key

However, init keeps failing at the following step:


I0204 15:04:24.985393  142883 uploadconfig.go:133] [upload-config] Preserving the CRISocket information for the control-plane node
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8-master01.loc.prov.domain.tld" as an annotation
I0204 15:04:25.485719  142883 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.13.1 (linux/amd64) kubernetes/eec55b9" -H "Accept: application/json, */*" 'https://k8-master01.loc.prov.domain.tld:8080/api/v1/nodes/k8-master01.loc.prov.domain.tld'
I0204 15:04:25.488810  142883 round_trippers.go:438] GET https://k8-master01.loc.prov.domain.tld:8080/api/v1/nodes/k8-master01.loc.prov.domain.tld 404 Not Found in 3 milliseconds

It keeps retrying and then eventually times out.


error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition

( Hostname has been fuzzed in above log )

Any suggestions as to how we can proceed?
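
One place to look: the 404 in the log means the endpoint answered but no Node object exists for the master, so the kubelet on it is not registering. Checking the kubelet's own view is a reasonable first step (a sketch):

journalctl -u kubelet --no-pager | grep -i 'unable to register'
kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes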
