To see the stack trace of this error execute with --v=5 or higher

I was trying to install a Kubernetes cluster using Kubespray and got the error "timed out waiting for the condition" at the step that creates a kubeadm token.

FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (5 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (5 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (5 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (4 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (4 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (4 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (3 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (3 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (3 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (2 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (2 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (2 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (1 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (1 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (1 retries left).

TASK [kubernetes/control-plane : Create kubeadm token for joining nodes with 24h expiration (default)] ************************************************************************************************************************************************
fatal: [master2 -> 10.0.4.101]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["/usr/local/bin/kubeadm", "--kubeconfig", "/etc/kubernetes/admin.conf", "token", "create"], "delta": "0:01:15.149534", "end": "2021-02-19 18:43:22.419216", "msg": "non-zero return code", "rc": 1, "start": "2021-02-19 18:42:07.269682", "stderr": "timed out waiting for the condition\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["timed out waiting for the condition", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "", "stdout_lines": []}
fatal: [master1 -> 10.0.4.101]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["/usr/local/bin/kubeadm", "--kubeconfig", "/etc/kubernetes/admin.conf", "token", "create"], "delta": "0:01:15.177863", "end": "2021-02-19 18:43:23.093810", "msg": "non-zero return code", "rc": 1, "start": "2021-02-19 18:42:07.915947", "stderr": "timed out waiting for the condition\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["timed out waiting for the condition", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "", "stdout_lines": []}
fatal: [master3 -> 10.0.4.101]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["/usr/local/bin/kubeadm", "--kubeconfig", "/etc/kubernetes/admin.conf", "token", "create"], "delta": "0:01:15.169796", "end": "2021-02-19 18:43:23.085743", "msg": "non-zero return code", "rc": 1, "start": "2021-02-19 18:42:07.915947", "stderr": "timed out waiting for the condition\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["timed out waiting for the condition", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "", "stdout_lines": []}

Versions

kubeadm version (use kubeadm version): kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:25:59Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/arm64"}

Environment:

  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/arm64"}

  • Cloud provider or hardware configuration: Raspberry Pi 4 4GB

  • OS (e.g. from /etc/os-release): Ubuntu 20.04.2

  • Kernel (e.g. uname -a): Linux master1 5.4.0-1028-raspi #31-Ubuntu SMP PREEMPT Wed Jan 20 11:30:45 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

What happened?

Failed to install Kubernetes on the Raspberry Pi cluster.

What did you expect to happen?

Timed out when creating the token:

"timed out waiting for the condition"
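The failing Ansible task simply runs kubeadm token create on the first control-plane node; a minimal way to reproduce and debug it by hand, assuming the admin.conf path shown in the task output, is:

# Run the same command Kubespray runs, with verbose output, on the first control-plane node
sudo /usr/local/bin/kubeadm --kubeconfig /etc/kubernetes/admin.conf token create --v=5

# Check whether the API server behind admin.conf is reachable and healthy
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw /healthz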

How to reproduce it (as minimally and precisely as possible)?

Raspberry Pi 4 4GB.
Use Kubespray and run from the console:

ansible-playbook -i ./test/inventory.ini cluster.yml

Anything else we need to know?


A. Cluster initialization errors

1. Error:
[WARNING Hostname]: hostname "master1" could not be reached
[WARNING Hostname]: hostname "master1": lookup master1 on 114.114.114.114:53: no such host
Details:

# kubeadm init --config kubeadm.yaml
W1124 09:40:03.139811   68129 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "scheduler"
W1124 09:40:03.333487   68129 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "master1" could not be reached
        [WARNING Hostname]: hostname "master1": lookup master1 on 114.114.114.114:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10259]: Port 10259 is in use
        [ERROR Port-10257]: Port 10257 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution:
Delete /etc/kubernetes/manifests, modify kubeadm.yaml so that the node name matches the host's hostname, and then perform the initialization again.

# ls /etc/kubernetes/
admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf
# ls
anaconda-ks.cfg  kubeadm.yaml
# rm -rf  /etc/kubernetes/manifests
# ls /etc/kubernetes/
admin.conf  controller-manager.conf  kubelet.conf  pki  scheduler.conf
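For the "modify kubeadm.yaml" part, a minimal sketch of the relevant fields is shown below; it assumes the kubeadm.k8s.io/v1beta2 API version that appears in the logs above, and the only point is that nodeRegistration.name must match the machine's real hostname:

# Hedged sketch of kubeadm.yaml, not the author's exact file
cat <<EOF > kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  name: $(hostname)      # must match the host's actual hostname
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.3
EOF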

2. Error:
WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
Details:

# kubeadm init --config kubeadm.yaml
W1124 09:47:13.677697   70122 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "scheduler"
W1124 09:47:13.876821   70122 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution:
Add --ignore-preflight-errors=all during initialization, that is:

kubeadm init --config kubeadm.yaml --ignore-preflight-errors=all
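Ignoring every preflight check works but can also hide real problems; as an alternative, only the two checks reported above can be ignored by name (check names taken from the error output, so this is a sketch rather than the author's exact command):

kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Port-10250,DirAvailable--var-lib-etcd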

3. Error:
[kubelet-check] Initial timeout of 40s passed.
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
Details:

# kubeadm init --config kubeadm.yaml --ignore-preflight-errors=all
W1124 09:54:07.361765   71406 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "scheduler"
W1124 09:54:07.480565   71406 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
        [WARNING Port-10250]: Port 10250 is in use
        [WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.510116 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher

Solution:
execute:

swapoff -a && kubeadm reset  && systemctl daemon-reload && systemctl restart kubelet  && iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

After running this, initialization succeeds on the next attempt.

# kubeadm init --config kubeadm.yaml --ignore-preflight-errors=all
W1124 10:00:18.648091   74450 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "scheduler"
W1124 10:00:18.760000   74450 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.68.127]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.68.127 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.68.127 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.517451 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8smaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.68.127:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:31020d84f523a2af6fc4fea38e514af8e5e1943a26312f0515e65075da314b29

Getting an error while initializing kubeadm:

$ sudo kubeadm init

 [init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: blkio
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
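The preflight message itself points at IPv4 forwarding being disabled; a minimal sketch of the usual fix, assuming a sysctl.d based setup rather than whatever was ultimately done here, is:

# Enable IPv4 forwarding persistently so the FileContent--proc-sys-net-ipv4-ip_forward check passes
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

# Verify
cat /proc/sys/net/ipv4/ip_forward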

I have checked here and here and followed the steps, but I was unable to resolve it.

To resolve it, I did the following.
First:

sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd
kubeadm init

Second: I edited the config.toml file and set systemd_cgroup = true.
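For reference, rather than deleting config.toml outright, you can regenerate a full default containerd config and switch the runc runtime to the systemd cgroup driver; this is a sketch based on containerd's documented config layout, not necessarily the exact edit made here:

# Regenerate the default containerd config, then enable the systemd cgroup driver
containerd config default | sudo tee /etc/containerd/config.toml
# The relevant setting lives under:
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd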

Then I tried

sudo kubeadm init --v=5



I0824 10:32:04.093515   27017 initconfiguration.go:116] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I0824 10:32:04.093872   27017 interface.go:432] Looking for default routes with IPv4 addresses
I0824 10:32:04.093890   27017 interface.go:437] Default route transits interface "eth0"
I0824 10:32:04.094018   27017 interface.go:209] Interface eth0 is up
I0824 10:32:04.094084   27017 interface.go:257] Interface "eth0" has 2 addresses :[172.31.37.138/20 fe80::69:d1ff:fea7:79ae/64].
I0824 10:32:04.094113   27017 interface.go:224] Checking addr  172.31.37.138/20.
I0824 10:32:04.094131   27017 interface.go:231] IP found 172.31.37.138
I0824 10:32:04.094147   27017 interface.go:263] Found valid IPv4 address 172.31.37.138 for interface "eth0".
I0824 10:32:04.094162   27017 interface.go:443] Found active IP 172.31.37.138
I0824 10:32:04.094197   27017 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0824 10:32:04.098681   27017 version.go:187] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
I0824 10:32:04.770260   27017 checks.go:568] validating Kubernetes and kubeadm version
I0824 10:32:04.770328   27017 checks.go:168] validating if the firewall is enabled and active
I0824 10:32:04.779958   27017 checks.go:203] validating availability of port 6443
I0824 10:32:04.780157   27017 checks.go:203] validating availability of port 10259
I0824 10:32:04.780197   27017 checks.go:203] validating availability of port 10257
I0824 10:32:04.780232   27017 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0824 10:32:04.780251   27017 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0824 10:32:04.780265   27017 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0824 10:32:04.780278   27017 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0824 10:32:04.780293   27017 checks.go:430] validating if the connectivity type is via proxy or direct
I0824 10:32:04.780317   27017 checks.go:469] validating http connectivity to first IP address in the CIDR
I0824 10:32:04.780341   27017 checks.go:469] validating http connectivity to first IP address in the CIDR
I0824 10:32:04.780353   27017 checks.go:104] validating the container runtime
I0824 10:32:04.794206   27017 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0824 10:32:04.794285   27017 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0824 10:32:04.794384   27017 checks.go:644] validating whether swap is enabled or not
I0824 10:32:04.794436   27017 checks.go:370] validating the presence of executable crictl
I0824 10:32:04.794466   27017 checks.go:370] validating the presence of executable conntrack
I0824 10:32:04.794486   27017 checks.go:370] validating the presence of executable ip
I0824 10:32:04.794506   27017 checks.go:370] validating the presence of executable iptables
I0824 10:32:04.794530   27017 checks.go:370] validating the presence of executable mount
I0824 10:32:04.794552   27017 checks.go:370] validating the presence of executable nsenter
I0824 10:32:04.794571   27017 checks.go:370] validating the presence of executable ebtables
I0824 10:32:04.794591   27017 checks.go:370] validating the presence of executable ethtool
I0824 10:32:04.794608   27017 checks.go:370] validating the presence of executable socat
I0824 10:32:04.794629   27017 checks.go:370] validating the presence of executable tc
I0824 10:32:04.794646   27017 checks.go:370] validating the presence of executable touch
I0824 10:32:04.794666   27017 checks.go:516] running all checks
        [WARNING SystemVerification]: missing optional cgroups: blkio
I0824 10:32:04.808265   27017 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0824 10:32:04.808291   27017 checks.go:610] validating kubelet version
I0824 10:32:04.871023   27017 checks.go:130] validating if the "kubelet" service is enabled and active
I0824 10:32:04.906852   27017 checks.go:203] validating availability of port 10250
I0824 10:32:04.907135   27017 checks.go:203] validating availability of port 2379
I0824 10:32:04.907346   27017 checks.go:203] validating availability of port 2380
I0824 10:32:04.907542   27017 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
error execution phase preflight
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:154
github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:974
github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1594

Getting error message:

service kubelet status



kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Wed 2022-08-24 14:56:42 UTC; 6s ago
       Docs: https://kubernetes.io/docs/home/
    Process: 2561 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
   Main PID: 2561 (code=exited, status=1/FAILURE)
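When kubelet sits in this activating (auto-restart) loop, the actual failure reason is usually only visible in its logs; a quick way to pull them with standard systemd tooling is:

# Show the most recent kubelet log lines to see why the process exits with status 1
sudo journalctl -u kubelet -n 50 --no-pager
# Or follow the log live while the unit keeps restarting
sudo journalctl -u kubelet -f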

1. unknown service runtime.v1alpha2.ImageService

Error: pulling image: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService

System configuration

CentOS 9 / 2 GB RAM / 2 CPUs

Master Node

Same issue on master node.

Command

[root@kube-master-1 ~]# kubeadm config images pull
failed to pull image "registry.k8s.io/kube-apiserver:v1.26.0": output: E0107 14:52:09.997544    4134 remote_image.go:222] "PullImage from image service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService" image="registry.k8s.io/kube-apiserver:v1.26.0"
time="2023-01-07T14:52:09Z" level=fatal msg="pulling image: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService"
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

✅ Solved

Remove the file below:

rm /etc/containerd/config.toml

Try again.

Worker Node

Same issue on the worker node while joining the master.

Command

[root@kube-worker-2 ~]# kubeadm join x.x.x.x:6443 --token ga8bqg.01azxe9avjx2n6jr        --discovery-token-ca-cert-hash sha256:d57699d74721094e5f921d48a0f9f895a0d7def7e1977e95ce0027a03e7f7d39
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR CRI]: container runtime is not running: output: E0107 17:46:12.269694   11160 remote_runtime.go:948] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2023-01-07T17:46:12Z" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

✅ Solved

Same fix: remove the config.toml file and restart the containerd service.

rm /etc/containerd/config.toml
systemctl restart containerd
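To confirm the CRI is healthy again after restarting containerd, a quick check with crictl, assuming the default containerd socket path, looks like this:

# Verify that the runtime and image services now answer on the CRI socket
crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
crictl --runtime-endpoint unix:///run/containerd/containerd.sock info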

In this article we are going to cover how to install a Kubernetes cluster on Ubuntu 20.04 LTS with kubeadm, whether on your own machines or on a cloud platform such as Amazon EC2, Azure VM, or Google Cloud Compute with preinstalled Ubuntu 20.04 LTS.

Prerequisites

  • 2 or 3 Ubuntu 20.04 LTS systems with a minimal installation
  • Minimum 2 CPUs and 3 GB RAM per node
  • Swap disabled on all nodes
  • SSH access with sudo privileges

Firewall Ports/Inbound Traffic Ports for Kubernetes Cluster

Control-plane node(s)

| Protocol | Direction | Port Range | Purpose                  | Used By              |
|----------|-----------|------------|--------------------------|----------------------|
| TCP      | Inbound   | 6443*      | Kubernetes API server    | All                  |
| TCP      | Inbound   | 2379-2380  | etcd server client API   | kube-apiserver, etcd |
| TCP      | Inbound   | 10250      | Kubelet API              | Self, Control plane  |
| TCP      | Inbound   | 10251      | kube-scheduler           | Self                 |
| TCP      | Inbound   | 10252      | kube-controller-manager  | Self                 |

Worker node(s)

| Protocol | Direction | Port Range  | Purpose            | Used By             |
|----------|-----------|-------------|--------------------|---------------------|
| TCP      | Inbound   | 10250       | Kubelet API        | Self, Control plane |
| TCP      | Inbound   | 30000-32767 | NodePort Services† | All                 |
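If a host firewall is active on the nodes, these ports need to be open before kubeadm init/join; a minimal sketch using ufw, assuming ufw is the firewall in use on your Ubuntu nodes, is:

# Control-plane node(s)
sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250:10252/tcp

# Worker node(s)
sudo ufw allow 10250/tcp
sudo ufw allow 30000:32767/tcp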

Disable swap

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Also comment out the reference to swap in /etc/fstab. Start by editing the below file:

sudo nano /etc/fstab

Reboot the system to take effect

sudo reboot

Update the system Packages

sudo apt-get update

#1. Install Docker Container Runtime on All Nodes (Master and Worker Nodes)

Install the packages below if they are not already installed:

sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Add the Docker official GPG Key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Add the Docker APT repository

echo   "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu 
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update the System Packages

sudo apt-get update -y

Install Docker Community Edition and the container runtime on both master and worker nodes:

sudo apt-get install docker-ce docker-ce-cli containerd.io -y

Add the Docker daemon configuration to use systemd as the cgroup driver:

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Check docker images

docker images

ERROR:

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/images/json: dial unix /var/run/docker.sock: connect: permission denied

Solution:

Add your user to the docker group and give permission on docker.sock:

sudo usermod -aG docker $USER

Change the docker.sock permission

sudo chmod 666 /var/run/docker.sock

Start the Docker service if not started

sudo systemctl start docker.service

To check the docker service status

sudo systemctl status docker.service

Enable Docker service at startup

sudo systemctl enable docker.service

Restart the Docker service

sudo systemctl restart docker

#2. Add Kubernetes GPG Key on All Nodes

Add the Kubernetes GPG key on all nodes.

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

#3. Add Kubernetes APT Repository on All Nodes

Add the Kubernetes apt repository on all nodes (Ubuntu).

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update the system packages:

sudo apt-get update

#4. Install Kubeadm, Kubelet and Kubectl on All Nodes

Install kubeadm, kubelet and kubectl using the command below.

sudo apt-get install -y kubelet kubeadm kubectl

Hold the packages so they are not upgraded:

sudo apt-mark hold kubelet kubeadm kubectl


#5. Initialize the Master node using kubeadm (on Master Node)

Next initialize the master node using kubeadm.

sudo kubeadm init --pod-network-cidr 10.0.0.0/16

Output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.6.177:6443 --token vr5rat.seyprj6jvw4xy43m \
        --discovery-token-ca-cert-hash sha256:4c9b53eb03744b4cf21c5bdacd712024eb09030561714cc5545838482c8017b3

As the output above shows, copy the join command (token and hash) to your notepad; we will need it to join the worker nodes to the master node.

Create a new .kube configuration directory and copy admin.conf from the /etc/kubernetes directory.

sudo mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

To check kubeadm version.

kubeadm version

To check master node status

kubectl get nodes

#6. Configure Pod Network and Verify Pod namespaces

Install the Weave network plugin so the master and worker nodes can communicate.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Output:

serviceaccount/weave-net created

clusterrole.rbac.authorization.k8s.io/weave-net created

clusterrolebinding.rbac.authorization.k8s.io/weave-net created

role.rbac.authorization.k8s.io/weave-net created

rolebinding.rbac.authorization.k8s.io/weave-net created

daemonset.apps/weave-net created

Check node status

#7. Join Worker Nodes to the Cluster

Next, join the two worker nodes to the master:

sudo kubeadm join 172.31.6.177:6443 --token vr5rat.seyprj6jvw4xy43m --discovery-token-ca-cert-hash sha256:4c9b53eb03744b4cf21c5bdacd712024eb09030561714cc5545838482c8017b3

Output:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the status of all nodes:

sudo kubectl get nodes

Output:

Status:

NAME               STATUS   ROLES    AGE     VERSION

ip-172-31-16-180   Ready    master   3m19s   v1.20.5

ip-172-31-16-86    Ready    worker1   6m15s   v1.20.5

ip-172-31-21-34    Ready    worker2   3m23s   v1.20.5

To verify the pods in all namespaces:

sudo kubectl get pods --all-namespaces

Output:

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE

kube-system   coredns-6955765f44-7sw4r                  1/1     Running   0          6m46s

kube-system   coredns-6955765f44-nwwx5                  1/1     Running   0          6m46s

kube-system   etcd-ip-172-31-16-86                      1/1     Running   0          6m53s

kube-system   kube-apiserver-ip-172-31-16-86            1/1     Running   0          6m53s

kube-system   kube-controller-manager-ip-172-31-16-86   1/1     Running   0          6m53s

kube-system   kube-proxy-b5vht                          1/1     Running   0          4m5s

kube-system   kube-proxy-cm6r4                          1/1     Running   0          4m1s

kube-system   kube-proxy-jxr9z                          1/1     Running   0          6m45s

kube-system   kube-scheduler-ip-172-31-16-86            1/1     Running   0          6m53s

kube-system   weave-net-99tsd                           2/2     Running   0          93s

kube-system   weave-net-bwshk                           2/2     Running   0          93s

kube-system   weave-net-g8rg8                           2/2     Running   0          93s

We have now covered installing the Kubernetes cluster on Ubuntu.

#8. Deploy Sample Nginx microservice on Kubernetes

Let's create a deployment on the master node named "nginx-deploy" using YAML.

sudo nano nginx-deploy.yaml

The deployment YAML file should look like below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80

Let's create the deployment using the kubectl command:

kubectl apply -f nginx-deploy.yaml

Output:

deployment.apps/nginx-deployment created

Let's check the pod status:

kubectl get pods

To check all pod information:

kubectl describe pods

To check pod IP addresses and their states:

kubectl get pods -o wide

Expose the Nginx deployment using a Kubernetes NodePort service (port 32001):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-app
spec:
  selector: 
    app: nginx-app
  type: NodePort  
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32001
EOF 

Now access the Nginx service using a worker node IP and port 32001.
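For example, from any machine that can reach the worker node (the address below is a placeholder; substitute a real node IP from kubectl get nodes -o wide):

curl http://<worker-node-ip>:32001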


To delete a pod:

kubectl delete pod fosstechnix-web-pod    (pod name)

OR

kubectl delete -f fosstechnix-web-pod.yml

Conclusion:

In this article, we have covered how to install a Kubernetes cluster on Ubuntu 20.04 LTS with kubeadm: initializing the master node, creating the pod network, joining worker nodes to the master, creating a deployment using YAML, checking the status of nodes, pods and namespaces, and deleting a pod.

Troubleshooting:

[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution:

Reset kubeadm and join again:

sudo kubeadm reset

We have covered How to Install Kubernetes Cluster on Ubuntu 20.04 LTS.

Related Articles:

  • 9 Steps to Setup Kubernetes on AWS using KOPS
  • How to Setup Kubernetes Dashboard
  • How to Install Docker on Windows 10

Reference:

Kubernetes install kubeadm official page

Minikube installation process

Introduction

minikube

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster in a virtual machine on your laptop so that users can try out Kubernetes or develop with it on a daily basis.

kubectl

kubectl is a command line interface for running commands against the Kubernetes cluster.

Installation

Environment

Installation environment: Parallels virtual machine
Operating system: CentOS 7 minimal

Process

Environment Initialization

The commands that need to be executed during initialization are listed below; they were collected from the errors encountered during installation.

# Install docker service
sudo yum install docker

# Enabling docker services
sudo systemctl enable docker.service

# Close the firewall
sudo systemctl stop firewalld

# Turn off memory swap
sudo swapoff -a

# Modify the Docker cgroup driver
sudo vi /lib/systemd/system/docker.service
# Add/adjust this option on the ExecStart line so the driver is cgroupfs
--exec-opt native.cgroupdriver=cgroupfs
# Reload configuration
systemctl daemon-reload
# Restart docker
systemctl restart docker

# Close selinux
sudo setenforce 0

# file right
sudo chmod -R 777 /etc/kubernetes/addons/

# Enable the kubelet service; note that you may need to run minikube start once first so that kubelet gets pulled in
systemctl enable kubelet.service

Start minikube

If you are running within a VM and your hypervisor does not allow nested virtualization, you will need to use the None (bare-metal) driver.

Because the services are already installed on the VM itself, we run with --vm-driver=none (bare metal).

sudo minikube start --vm-driver=none
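Once it finishes, a quick sanity check (using sudo as above, since the none driver runs the components as root) is:

sudo minikube status
sudo kubectl get nodes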

Open minikube dashboard

The dashboard is a Web-based Kubernetes user interface. You can use dashboards to deploy containerized applications to Kubernetes clusters, troubleshoot containerized applications and manage cluster resources. You can use Dashboard to outline applications running on the cluster and to create or modify individual Kubernetes resources (such as Deployments, Jobs, DaemonSets, etc.). For example, you can use the Deployment Wizard to extend deployment, start rolling updates, restart Pod, or deploy new applications.
The dashboard also provides information about the status of Kubernetes resources in the cluster and any errors that may occur.

If installed directly on macOS, you can use minikube dashboard

$ sudo minikube dashboard

🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
http://127.0.0.1:46727/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

Host Browser Access

Because we use CentOS 7 minimal, which has only a terminal environment, we need to access the dashboard from a browser on the host without running minikube dashboard. Use the following command to open a proxy:

sudo kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'

And use the following address to access

http://xxx.xxx.xxx.xxx:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy

IP acquisition

To get the virtual machine IP (the xxx.xxx.xxx.xxx above), you can use the following command.

sudo minikube ip

Reference material

<WEB UI (Dashboard)>

<minikube linux installation>

Install and Set up kubectl

"Running Kubernetes locally through Minikube"

Error Reporting + Solution

Problem

sudo minikube start --vm-driver=none
sudo: minikube: command not found

Solution
Add /usr/local/bin to secure_path; it takes effect immediately after saving.

su root
vi /etc/sudoers
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin

Problem

Unable to start VM: create: precreate: exec: "docker": executable file not found in $PATH

Solution

sudo yum install docker

Problem

💣  Error starting cluster: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap

: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
 output: [init] Using Kubernetes version: v1.16.0
[preflight] Running pre-flight checks
    [WARNING Firewalld]: firewalld is active, please ensure ports [8443 10250] are open or your cluster may not function correctly
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING FileExisting-socat]: socat not found in system path
    [WARNING Hostname]: hostname "minikube" could not be reached
    [WARNING Hostname]: hostname "minikube": lookup minikube on 10.211.55.1:53: no such host
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose

Solution

systemctl stop firewalld
swapoff -a
systemctl enable docker.service
systemctl enable kubelet.service

Problem

failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

Solution

vi /lib/systemd/system/docker.service
# Change the cgroup driver to cgroupfs (--exec-opt native.cgroupdriver=cgroupfs)
systemctl daemon-reload
systemctl restart docker

Problem

❌  Problems detected in kube-addon-manager [13ce287ce3f6]:
    error: Error loading config file "/var/lib/minikube/kubeconfig": open /var/lib/minikube/kubeconfig: permission denied
    error: Error loading config file "/var/lib/minikube/kubeconfig": open /var/lib/minikube/kubeconfig: permission denied
    error: Error loading config file "/var/lib/minikube/kubeconfig": open /var/lib/minikube/kubeconfig: permission denied

Solution

setenforce 0

Problem

minikube dashboard
🔌  Enabling dashboard ...

💣  Unable to enable dashboard: [enabling addon deploy/addons/dashboard/dashboard-clusterrole.yaml: error creating file at /etc/kubernetes/addons/dashboard-clusterrole.yaml: open /etc/kubernetes/addons/dashboard-clusterrole.yaml: permission denied]

Solution

sudo chmod -R 777 /etc/kubernetes/addons/

Problem

 kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'
Starting to serve on [::]:8001
2019/09/23 18:22:52 http: proxy error: dial tcp [::1]:8080: connect: connection refused
2019/09/23 18:22:53 http: proxy error: dial tcp [::1]:8080: connect: connection refused
2019/09/23 18:22:53 http: proxy error: dial tcp [::1]:8080: connect: connection refused

Solution
Run kubectl with sudo:

sudo kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'
