Network plugin returns error cni plugin not initialized

I’m using kind to run a test Kubernetes cluster on my local MacBook.

I found one of the nodes with status NotReady:

$ kind get clusters                                                                                                                                                                 
mc

$ kubectl get nodes
NAME                STATUS     ROLES    AGE     VERSION
mc-control-plane    Ready      master   4h42m   v1.18.2
mc-control-plane2   Ready      master   4h41m   v1.18.2
mc-control-plane3   Ready      master   4h40m   v1.18.2
mc-worker           NotReady   <none>   4h40m   v1.18.2
mc-worker2          Ready      <none>   4h40m   v1.18.2
mc-worker3          Ready      <none>   4h40m   v1.18.2

The only interesting thing in kubectl describe node mc-worker is that the CNI plugin is not initialized:

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 11 Aug 2020 16:55:44 -0700   Tue, 11 Aug 2020 12:10:16 -0700   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 11 Aug 2020 16:55:44 -0700   Tue, 11 Aug 2020 12:10:16 -0700   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 11 Aug 2020 16:55:44 -0700   Tue, 11 Aug 2020 12:10:16 -0700   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Tue, 11 Aug 2020 16:55:44 -0700   Tue, 11 Aug 2020 12:10:16 -0700   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

I have 2 similar clusters and this only occurs on this cluster.

Since kind uses the local Docker daemon to run these nodes as containers, I have already tried restarting the container (which should be the equivalent of rebooting the node).
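For reference, the restart I tried looked roughly like this (assuming kind's default naming, where the node name is also the Docker container name):

# list the kind node container and restart the NotReady one
docker ps --filter "name=mc-worker" --format "{{.Names}}: {{.Status}}"
docker restart mc-worker

# then watch the node status
kubectl get nodes -w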

I have considered deleting and recreating the cluster, but there ought to be a way to solve this without recreating the cluster.

Here are the versions that I’m running:

$ kind version                                                                                                                                                                     
kind v0.8.1 go1.14.4 darwin/amd64

$ kubectl version                                                                                                                                                  
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

How do you resolve this issue?

We have a cluster with 4 worker nodes and 1 master, with the flannel CNI installed. One kube-flannel-ds-xxxx pod is running on every node.

They used to run fine, but 1 node suddenly entered NotReady state and does not come out of it anymore.

journalctl -u kubelet -f on the node constantly emits "cni plugin not initialized":

Jul 25 14:44:05 ubdock09 kubelet[13076]: E0725 14:44:05.916280   13076 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

Deleting the flannel pod makes a new one start up, but the plugin remains uninitialized.
What can we do or check to fix this?

asked Jul 25, 2022 at 12:52 by Serve Laurijssen

Found the cause in containerd's journalctl output:

Jul 25 15:10:36 ubdock09 containerd[23164]: time="2022-07-25T15:10:36.480398235+02:00" level=error msg="failed to reload cni configuration after receiving fs change event("/etc/cni/net.d/.10-flannel.conf.swp": REMOVE)" error="cni config load failed: failed to load CNI config file /etc/cni/net.d/10-flannel.conf: error parsing configuration: missing 'type': invalid cni config: failed to load cni config"

The Ready machines did not have /etc/cni/net.d/10-flannel.conf, so I removed the /etc/cni/net.d directory and the cni0 network device that had been created by the container network interface:

id@machine:/# ip -4 addr show
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.244.11.1/24 brd 10.244.11.255 scope global cni0
       valid_lft forever preferred_lft forever

ip link delete cni0

Then I restarted containerd and the flannel pod. Now the node is Ready and cni0 has been recreated.
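Put together, the fix was roughly this sequence (the kube-flannel-ds-xxxx pod name is the one running on the broken node; the kube-system namespace assumes the older flannel manifest, newer manifests use kube-flannel):

# on the broken node: clear the stale CNI config and interface
sudo rm -rf /etc/cni/net.d
sudo ip link delete cni0
sudo systemctl restart containerd

# then recreate the flannel pod for that node and watch the node come back
kubectl -n kube-system delete pod kube-flannel-ds-xxxx
kubectl get nodes -w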

answered Jul 25, 2022 at 13:19 by Serve Laurijssen



Here is my instance profile:

{
    "InstanceProfile": {
        "Path": "/",
        "InstanceProfileName": "dev20200228131815513700000007",
        "InstanceProfileId": "AIPATOZ752HGI6YJTG4UB",
        "Arn": "arn:aws:iam::XXXXXXXXXXXX:instance-profile/dev20200228131815513700000007",
        "CreateDate": "2020-02-28T13:18:15+00:00",
        "Roles": [
            {
                "Path": "/",
                "RoleName": "dev20200228131814120700000005",
                "RoleId": "AROATOZ752HGG5567G65U",
                "Arn": "arn:aws:iam::XXXXXXXXXXXX:role/dev20200228131814120700000005",
                "CreateDate": "2020-02-28T13:18:14+00:00",
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Sid": "EKSWorkerAssumeRole",
                            "Effect": "Allow",
                            "Principal": {
                                "Service": "ec2.amazonaws.com"
                            },
                            "Action": "sts:AssumeRole"
                        },
                        {
                            "Effect": "Allow",
                            "Principal": {
                                "Federated": "arn:aws:iam::XXXXXXXXXXXX:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/MY_OIDC"
                            },
                            "Action": "sts:AssumeRoleWithWebIdentity",
                            "Condition": {
                                "StringEquals": {
                                    "oidc.eks.us-east-1.amazonaws.com/id/MY_OIDC:aud": "sts.amazonaws.com",
                                    "oidc.eks.us-east-1.amazonaws.com/id/MY_OIDC:sub": "system:serviceaccount:kube-system:aws-node"
                                }
                            }
                        },
                        {
                            "Effect": "Allow",
                            "Principal": {
                                "Federated": "arn:aws:iam::XXXXXXXXXXXX:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/MY_OIDC"
                            },
                            "Action": "sts:AssumeRoleWithWebIdentity",
                            "Condition": {
                                "StringEquals": {
                                    "oidc.eks.us-east-1.amazonaws.com/id/MY_OIDC:aud": "sts.amazonaws.com",
                                    "oidc.eks.us-east-1.amazonaws.com/id/MY_OIDC:sub": "system:serviceaccount:karpenter:karpenter"
                                }
                            }
                        }
                    ]
                }
            }
        ],
        "Tags": [
            {
                "Key": "Project",
                "Value": "my_proj"
            },
            {
                "Key": "Environment",
                "Value": "dev"
            },
            {
                "Key": "Terraformed",
                "Value": "true"
            },
            {
                "Key": "vpc-name",
                "Value": "dev"
            }
        ]
    }
}

Here are the policies attached to my role:

{
    "AttachedPolicies": [
        {
            "PolicyName": "AmazonEKSWorkerNodePolicy",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
        },
        {
            "PolicyName": "AmazonS3FullAccess",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonS3FullAccess"
        },
        {
            "PolicyName": "AmazonEC2ContainerRegistryReadOnly",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
        },
        {
            "PolicyName": "CloudWatchAgentServerPolicy",
            "PolicyArn": "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
        },
        {
            "PolicyName": "AmazonDynamoDBFullAccess",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
        },
        {
            "PolicyName": "AmazonSSMManagedInstanceCore",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
        },
        {
            "PolicyName": "AmazonCognitoPowerUser",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonCognitoPowerUser"
        },
        {
            "PolicyName": "AmazonEKS_CNI_Policy",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
        },
        {
            "PolicyName": "XRayWriteNodes",
            "PolicyArn": "arn:aws:iam::XXXXXXXXXXXX:policy/XRayWriteNodes"
        },
        {
            "PolicyName": "eks-autoscaler-dev",
            "PolicyArn": "arn:aws:iam::XXXXXXXXXXXX:policy/eks-autoscaler-dev"
        },
        {
            "PolicyName": "eks-worker-autoscaling-dev",
            "PolicyArn": "arn:aws:iam::XXXXXXXXXXXX:policy/eks-worker-autoscaling-dev"
        },
        {
            "PolicyName": "KarpenterControllerPolicy-dev",
            "PolicyArn": "arn:aws:iam::XXXXXXXXXXXX:policy/KarpenterControllerPolicy-dev"
        },
        {
            "PolicyName": "AmazonEKSVPCResourceController",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
        }
    ]
}
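For context, output like the above can be reproduced with the AWS CLI (the profile and role names are the ones from the JSON above):

aws iam get-instance-profile --instance-profile-name dev20200228131815513700000007
aws iam list-attached-role-policies --role-name dev20200228131814120700000005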

Custom policies are:
eks-autoscaler-dev

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "eksWorkerAutoscalingAll",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeLaunchTemplateVersions",
                "autoscaling:DescribeTags",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeAutoScalingGroups"
            ],
            "Resource": "*"
        },
        {
            "Sid": "eksWorkerAutoscalingOwn",
            "Effect": "Allow",
            "Action": [
                "autoscaling:UpdateAutoScalingGroup",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "autoscaling:SetDesiredCapacity"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "autoscaling:ResourceTag/k8s.io/cluster-autoscaler/enabled": "true",
                    "autoscaling:ResourceTag/kubernetes.io/cluster/dev": "owned"
                }
            }
        }
    ]
}

eks-worker-autoscaling-dev

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "eksWorkerAutoscalingAll",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeLaunchTemplateVersions",
                "autoscaling:DescribeTags",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeAutoScalingGroups"
            ],
            "Resource": "*"
        },
        {
            "Sid": "eksWorkerAutoscalingOwn",
            "Effect": "Allow",
            "Action": [
                "autoscaling:UpdateAutoScalingGroup",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "autoscaling:SetDesiredCapacity"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "autoscaling:ResourceTag/k8s.io/cluster-autoscaler/enabled": "true",
                    "autoscaling:ResourceTag/kubernetes.io/cluster/dev": "owned"
                }
            }
        }
    ]
}

KarpenterControllerPolicy-dev

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ec2:CreateLaunchTemplate",
                "ec2:CreateFleet",
                "ec2:RunInstances",
                "ec2:CreateTags",
                "iam:PassRole",
                "ec2:TerminateInstances",
                "ec2:DeleteLaunchTemplate",
                "ec2:DescribeLaunchTemplates",
                "ec2:DescribeInstances",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeInstanceTypes",
                "ec2:DescribeInstanceTypeOfferings",
                "ec2:DescribeAvailabilityZones",
                "ssm:GetParameter"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}

XRayWriteNodes

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "xray:PutTraceSegments",
                "xray:PutTelemetryRecords"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

How do I resolve kubelet or CNI plugin issues for Amazon EKS?

Last updated: 2021-11-15

I want to resolve issues with my kubelet or CNI plugin for Amazon Elastic Kubernetes Service (Amazon EKS).

Short description

For your CNI plugin (on the Kubernetes website) to assign an IP address to the pods on your worker node, you must have the following:

  • AWS Identity and Access Management (IAM) permissions, including a CNI policy attached to your worker node’s IAM role or provided through service account IAM roles
  • An Amazon EKS API server endpoint that can be reached from the worker node
  • Network access to API endpoints for Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Registry (Amazon ECR), and Amazon Simple Storage Service (Amazon S3)
  • Enough available IP addresses in your subnet
  • A kube-proxy that’s running successfully for the aws-node pod to progress into Ready status

Resolution

Verify that the aws-node pod is in Running status on each worker node

To verify that the aws-node pod is in Running status on a worker node, run the following command:

kubectl get pods -n kube-system -l k8s-app=aws-node -o wide

If the command output shows that the RESTARTS count is 0, then the aws-node pod is in Running status. Try the troubleshooting steps in the Verify that your subnet has enough free IP addresses available section.

If the command output shows that the RESTARTS count is greater than 0, then try the following steps:

Verify that the worker node can reach the API server endpoint of your Amazon EKS cluster:

curl -vk https://eks-api-server-endpoint-url
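If you're not sure of the endpoint URL, you can pull it with the AWS CLI (replace my-cluster with your cluster name):

aws eks describe-cluster --name my-cluster --query "cluster.endpoint" --output text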

Verify connectivity to your Amazon EKS cluster

1.    Verify that your worker node’s security group settings for Amazon EKS are correctly configured. For more information, see Amazon EKS security group considerations.

2.    Verify that your worker node’s network access control list (ACL) rules for your subnet allow communication with the Amazon EKS API server endpoint.

Important: Allow inbound and outbound traffic on port 443.

3.    Verify that the kube-proxy pod is in Running status on each worker node:

kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide

4.    Verify that your worker node can access API endpoints for Amazon EC2, Amazon ECR, and Amazon S3.

Note: You can configure these services through public endpoints or AWS PrivateLink.

Verify that your subnet has enough free IP addresses available

To list the available IP addresses in each subnet of your Amazon Virtual Private Cloud (Amazon VPC), run the following command (replace VPCID with your VPC ID):

aws ec2 describe-subnets --filters "Name=vpc-id,Values=VPCID" | jq '.Subnets[] | .SubnetId + "=" + "\(.AvailableIpAddressCount)"'

Note: The AvailableIpAddressCount should be greater than 0 for the subnet where the pods are launched.

Check whether your security group limits have been reached

Your pod networking configuration can fail if you reach the limit of security groups per elastic network interface.

For more information, see Amazon VPC quotas.
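To see how many security groups are attached to a worker node's network interfaces, a query like the following can help (the instance ID is a placeholder):

aws ec2 describe-network-interfaces --filters "Name=attachment.instance-id,Values=i-0123456789abcdef0" --query "NetworkInterfaces[].Groups[].GroupId"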

Verify that you’re running the latest stable version of the CNI plugin
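For example, one common way to check the installed Amazon VPC CNI version (this assumes the standard aws-node DaemonSet in kube-system):

kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni: | cut -d ":" -f 3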

Check the logs of the VPC CNI plugin on the worker node

If you created a pod and an IP address didn’t get assigned to the container, then you receive the following error:

failed to assign an IP address to container

To check the logs, go to the /var/log/aws-routed-eni/ directory, and then locate the files named plugin.log and ipamd.log.
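For example, on the worker node (root access may be required to read these logs):

sudo tail -n 50 /var/log/aws-routed-eni/plugin.log
sudo tail -n 50 /var/log/aws-routed-eni/ipamd.log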

Verify that your kubelet pulls the Docker container images

If your kubelet doesn’t pull the Docker container images for the kube-proxy and amazon-k8s-cni containers, then you receive the following error:

network plugin is not ready: cni config uninitialized

Make sure that the EKS API server endpoint can be reached from the worker node.
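To confirm whether those images are present on the node, a check like this can help (assuming a containerd-based node with crictl installed):

sudo crictl images | grep -E "kube-proxy|amazon-k8s-cni"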

Verify that the WARM_PREFIX_TARGET value is set correctly

WARM_PREFIX_TARGET must be set to a value greater than or equal to 1. If it’s set to 0, then you receive the following error:

Error: Setting WARM_PREFIX_TARGET = 0 is not supported while WARM_IP_TARGET/MINIMUM_IP_TARGET is not set. 
Please configure either one of the WARM_{PREFIX/IP}_TARGET or MINIMUM_IP_TARGET env variable
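You can inspect or adjust this setting on the aws-node DaemonSet, for example (a sketch assuming the default DaemonSet name and namespace):

kubectl -n kube-system describe daemonset aws-node | grep WARM
kubectl -n kube-system set env daemonset/aws-node WARM_PREFIX_TARGET=1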

Check the reserved space in the subnet

Make sure that you have enough available /28 IP CIDR (16 IPs) blocks in the subnet. All 16 IPs must be contiguous. If you don’t have a /28 range of contiguous IPs, then IP address allocation fails with an error.

To resolve the error, create a new subnet and launch the pods from there. You can also use an Amazon EC2 subnet CIDR reservation to reserve space within a subnet with an assigned prefix. For more information, see Subnet CIDR reservations.
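As a sketch, a prefix reservation can be created with the AWS CLI roughly like this (the subnet ID and CIDR are placeholders):

aws ec2 create-subnet-cidr-reservation --subnet-id subnet-0123456789abcdef0 --cidr 10.0.0.32/28 --reservation-type prefix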



I checked the status of the cp and worker nodes:

kubectl get node
NAME     STATUS     ROLES           AGE    VERSION
cp       NotReady   control-plane   135m   v1.24.1
worker   Ready      <none>          50m    v1.24.1

and found that cp is NotReady. I then ran:

kubectl describe node cp
Name:               cp
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=cp
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 19 Aug 2022 19:45:19 +0000
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  cp
  AcquireTime:     <unset>
  RenewTime:       Fri, 19 Aug 2022 22:04:04 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 19 Aug 2022 22:02:02 +0000   Fri, 19 Aug 2022 19:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 19 Aug 2022 22:02:02 +0000   Fri, 19 Aug 2022 19:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 19 Aug 2022 22:02:02 +0000   Fri, 19 Aug 2022 19:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 19 Aug 2022 22:02:02 +0000   Fri, 19 Aug 2022 19:45:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
  InternalIP:  10.2.0.5
  Hostname:    cp
Capacity:
  cpu:                2
  ephemeral-storage:  20134592Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8137476Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  18556039957
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8035076Ki
  pods:               110
System Info:
  Machine ID:                 fff78146cb1e357890a1028fae17828d
  System UUID:                fff78146-cb1e-3578-90a1-028fae17828d
  Boot ID:                    a461b440-a502-452d-a6d6-6de511f8f630
  Kernel Version:             5.15.0-1016-gcp
  OS Image:                   Ubuntu 20.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.6.7
  Kubelet Version:            v1.24.1
  Kube-Proxy Version:         v1.24.1
PodCIDR:                      192.168.0.0/24
PodCIDRs:                     192.168.0.0/24
Non-terminated Pods:          (5 in total)
  Namespace                   Name                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                          ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-cp                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         138m
  kube-system                 kube-apiserver-cp             250m (12%)    0 (0%)      0 (0%)           0 (0%)         138m
  kube-system                 kube-controller-manager-cp    200m (10%)    0 (0%)      0 (0%)           0 (0%)         138m
  kube-system                 kube-proxy-mrj6b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         138m
  kube-system                 kube-scheduler-cp             100m (5%)     0 (0%)      0 (0%)           0 (0%)         138m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (32%)  0 (0%)
  memory             100Mi (1%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                From             Message
  ----     ------                   ----               ----             -------
  Normal   Starting                 42m                kube-proxy       
  Warning  InvalidDiskCapacity      43m                kubelet          invalid capacity 0 on image filesystem
  Normal   Starting                 43m                kubelet          Starting kubelet.
  Normal   NodeAllocatableEnforced  43m                kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  43m (x8 over 43m)  kubelet          Node cp status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    43m (x7 over 43m)  kubelet          Node cp status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     43m (x7 over 43m)  kubelet          Node cp status is now: NodeHasSufficientPID
  Normal   RegisteredNode           42m                node-controller  Node cp event: Registered Node cp in Controller

The following error appears to be the main culprit:

container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

Any tips to get the CNI plugin initialized? I have validated that:

  • SELinux is not installed (sestatus returns Command 'sestatus' not found)
  • swap is off (swapoff -a)
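If useful, I can also share the output of checks like these (paths assume a kubeadm + containerd setup):

ls /etc/cni/net.d/
ls /opt/cni/bin/
kubectl get pods -n kube-system -o wide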

The "invalid capacity 0 on image filesystem" warning seemed like it might be a culprit as well, but I don’t see anything alarming in the following:

df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        20G  3.9G   16G  21% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           795M  1.6M  794M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/loop0       56M   56M     0 100% /snap/core18/2538
/dev/loop1       62M   62M     0 100% /snap/core20/1587
/dev/loop2      304M  304M     0 100% /snap/google-cloud-cli/58
/dev/loop3       47M   47M     0 100% /snap/snapd/16292
/dev/loop4       68M   68M     0 100% /snap/lxd/22753
/dev/sda15      105M  5.2M  100M   5% /boot/efi
shm              64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/cc8a9699f213ac53638eb5afa78b3d4f26925227b7d690b61e4ea0f212c044c4/shm
shm              64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/1252fad3b6c04d24951354fca3aba5c4970bed78b835bcf15b1675891d16aea5/shm
shm              64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/8fa97bec0fce31245a2946ef5b605d4611a82e4867cef26ab96024e6be7fe377/shm
shm              64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/a6a9d66d452157caf8b3cc7b4ac1b7c95d895f890eb0b4ae1d4cf475aa9d7421/shm
shm              64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/c7fe176d5f0091b8c88446c05e9c8e50b393f33db35057d3879e7eb17037cd83/shm
/dev/loop5       62M   62M     0 100% /snap/core20/1611
/dev/loop6      304M  304M     0 100% /snap/google-cloud-cli/60
tmpfs           795M     0  795M   0% /run/user/1001
