Posting this as Community Wiki, since the root cause was mentioned by @David Maze.
As was pointed out in the comments, your versions are very different.
Kubernetes 1.7 was released around July 2017, while Kubernetes 1.17 was released in January 2020 (almost a 2.5-year difference). Another thing to keep in mind is that your Docker and Minikube versions must support your Kubernetes version.
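Before anything else, it is worth checking what each component reports; a quick sketch (all three commands exist in current releases):
minikube version
kubectl version --short
docker version --format '{{.Server.Version}}'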
For example, if you try to run Kubernetes 1.6.4 on the latest Minikube version, an error occurs:
minikube v1.7.3 on Ubuntu 16.04
✨ Using the none driver based on user configuration
⚠️ Specified Kubernetes version 1.6.4 is less than the oldest supported version: v1.11.10
💣 Sorry, Kubernetes 1.6.4 is not supported by this release of minikube
Also, there was a huge change in apiVersions between versions 1.15 and 1.16. More details can be found here.
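If you are unsure which API groups and versions your cluster actually serves, you can ask the apiserver directly (this works on clusters new enough to answer discovery requests):
kubectl api-versions                         # every group/version the server supports
kubectl api-resources | grep -i deployment   # which group/version serves Deployments on this server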
This Stack Overflow thread explains what is shown in kubectl version.
The second line ("Server Version") contains the apiserver version.
For example, the Network Policy API was introduced in Kubernetes 1.7, so if you try to use it against 1.6, you will get an error because the API server cannot recognize it.
I’ve reproduced your issue.
minikube:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.3", GitCommit:"0480917b552be33e2dba47386e51decb1a211df6", GitTreeState:"dirty", BuildDate:"2017-05-12T10:50:10Z", GoVersion:"go1.7", Compiler:"gc", Platform:"linux/amd64"}
minikube:~$ kubectl get pods
Error from server (NotAcceptable): the server was unable to respond with a content type that the client supports (get pods)
minikube:~$ kubectl get nodes
Error from server (NotAcceptable): the server was unable to respond with a content type that the client supports (get nodes)
minikube:~$ kubectl run nginx --image=nginx
WARNING: New generator "deployment/apps.v1" specified, but it isn't available. Falling back to "deployment/apps.v1beta1".
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
error: no matches for kind "Deployment" in version "apps/v1"
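The deprecation warning above also hints at a workaround on very old servers: create a plain v1 Pod instead of an apps/v1 Deployment, since the core v1 API predates the apps group. A minimal sketch:
kubectl run --generator=run-pod/v1 nginx --image=nginx   # creates a bare v1 Pod, no Deployment involved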
As I mentioned before, Network Policy was introduced in 1.7. If you try to apply this config from the official Kubernetes docs, it will show the same error you have:
minikube:~$ kubectl apply -f network.yaml
Error from server (NotFound): the server could not find the requested resource.
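For reference, the config in question is roughly the default-deny-ingress example from the docs; a minimal sketch of it, applied inline (the policy name is arbitrary):
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}     # selects every pod in the namespace
  policyTypes:
  - Ingress           # no ingress rules given, so all ingress is denied
EOF
On a 1.7+ server this is accepted; on 1.6 it fails with the NotFound error above, because networking.k8s.io/v1 does not exist there.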
The recommended way is to install the newest versions of Docker, Kubernetes, and Minikube (for security fixes and the newest features), following the Docker docs, the Kubernetes kubectl docs, and the Minikube docs.
Another option is to downgrade all components.
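If you do need an older (or specific) cluster version, recent Minikube releases let you pin it so the control plane matches your kubectl, for example:
minikube start --kubernetes-version=v1.16.0   # start the cluster at a chosen Kubernetes version
kubectl version --short                       # confirm client and server are now close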
Updated 9/20/19
Problem scenario
You run a kubectl command and receive "Error from server (NotFound): the server could not find the requested resource." How do you resolve this?
Solution
1.a. Run this command: kubectl version | grep Version
Look at the GitVersion values for the client and server. They should match or nearly match. (You do not necessarily want the latest version for the client. It is easier to upgrade the client version than the server version.)
1.b. This is an optional step if you are not sure whether the kubectl client version is very different from the server version. If the output looks too busy, you can try either or both of these commands:
kubectl version | grep Version | awk '{print $5}'
kubectl version | grep Version | awk '{print $4}'
The minor versions (the number between the decimal points, like the "15" in 1.15.4) should be within one of each other. The kubectl command can sometimes work with a larger difference, but once the gap is two or more, you may get this "Error from server (NotFound): the server could not find the requested resource" message. (For clarity, "very different" means that the minor versions differ by two or more; some larger variances can be tolerated, but do not expect it to work when the difference is more than two.) The error message that you received is likely due to the minor versions of the client and server being too far apart. (If they are the same or within one of each other, this solution will not help you.) The rest of this solution is about downloading a kubectl client binary whose version is closer to the server's version and using it.
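A small sketch that pulls out just the two GitVersion strings for comparison (assuming the default kubectl version output format shown above; adjust the awk field if yours differs):
client=$(kubectl version 2>/dev/null | awk '/Client Version/{print $5}')
server=$(kubectl version 2>/dev/null | awk '/Server Version/{print $5}')
echo "client: $client"   # e.g. GitVersion:"v1.17.3",
echo "server: $server"   # e.g. GitVersion:"v1.6.3",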
2. Make a copy of the kubectl file (e.g., to your home directory). This way you can roll back to it if you have to back out of this change. Download the kubectl that is consistent with the server version. Replace X.Y.Z with the server's version (as seen in the output of the command shown in 1.a. above) in the following commands:
cd /tmp/
curl -LO https://storage.googleapis.com/kubernetes-release/release/vX.Y.Z/bin/linux/amd64/kubectl
# An example of the above URL may be https://storage.googleapis.com/kubernetes-release/release/v1.15.5/bin/linux/amd64/kubectl # where 1.15.5 is the version associated with the "kubectl version" output for Server Version.
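If kubectl can still report the server version (as it does in the output above), you can substitute it into the URL instead of typing it by hand; a sketch:
V=$(kubectl version --short 2>/dev/null | awk '/Server Version/{print $3}')   # e.g. v1.15.5
curl -LO "https://storage.googleapis.com/kubernetes-release/release/${V}/bin/linux/amd64/kubectl"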
3. Place this kubectl file where the original kubectl file was (e.g., use sudo mv -i /tmp/kubectl /usr/bin/).
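Putting steps 2 and 3 together, assuming the original binary lives in /usr/bin (verify with command -v kubectl):
cp /usr/bin/kubectl ~/kubectl.orig    # rollback copy from step 2
chmod +x /tmp/kubectl                 # the downloaded file is not executable by default
sudo mv -i /tmp/kubectl /usr/bin/kubectl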
4. Run this command: sudo chmod 777 /usr/bin/kubectl
To read more about the implications of doing this (as it could allow other users on the server to run kubectl), see this posting.
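Note that 777 makes the binary world-writable as well as world-executable; if that concerns you, a more restrictive mode is enough for everyone to run it:
sudo chmod 755 /usr/bin/kubectl   # owner can write; everyone can read and execute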
5. You are done. Now run the kubectl commands (e.g., kubectl get pods, kubectl get svc).
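To confirm the fix took, re-check the versions before moving on:
kubectl version | grep Version   # client and server GitVersions should now match or nearly match
kubectl get pods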
jmeter_master_deploy.yaml.txt
The exact command to reproduce the issue:
kubectl create -f jmeter_master_deploy.yaml -v=8
The full output of the command that failed:
I1008 10:46:16.754622 16488 loader.go:357] Config loaded from file /Users/{user}/.kube/config
I1008 10:46:16.764550 16488 round_trippers.go:414] GET https://192.168.99.100:8443/swagger-2.0.0.pb-v1
I1008 10:46:16.764574 16488 round_trippers.go:421] Request Headers:
I1008 10:46:16.764581 16488 round_trippers.go:424] Accept: application/json, */*
I1008 10:46:16.764585 16488 round_trippers.go:424] User-Agent: kubectl/v1.9.2 (darwin/amd64) kubernetes/5fa2db2
I1008 10:46:16.774147 16488 round_trippers.go:439] Response Status: 404 Not Found in 9 milliseconds
I1008 10:46:16.774175 16488 round_trippers.go:442] Response Headers:
I1008 10:46:16.774180 16488 round_trippers.go:445] Cache-Control: no-cache, private
I1008 10:46:16.774184 16488 round_trippers.go:445] Content-Type: application/json
I1008 10:46:16.774188 16488 round_trippers.go:445] Content-Length: 1113
I1008 10:46:16.774191 16488 round_trippers.go:445] Date: Tue, 08 Oct 2019 17:46:16 GMT
I1008 10:46:16.775829 16488 request.go:873] Response Body: {
"paths": [
"/apis",
"/apis/",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1",
"/apis/apiextensions.k8s.io/v1beta1",
"/healthz",
"/healthz/etcd",
"/healthz/log",
"/healthz/ping",
"/healthz/poststarthook/crd-informer-synced",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/livez",
"/livez/log",
"/livez/ping",
"/livez/poststarthook/crd-informer-synced",
"/livez/poststarthook/generic-apiserver-start-informers",
"/livez/poststarthook/start-apiextensions-controllers",
"/livez/poststarthook/start-apiextensions-informers",
"/metrics",
"/openapi/v2",
"/readyz",
"/readyz/log",
"/readyz/ping",
"/readyz/poststarthook/crd-informer-synced",
"/readyz/poststarthook/generic-apiserver-start-informers",
"/readyz/poststarthook/start-apiextensions-controllers",
"/readyz/p [truncated 89 chars]
I1008 10:46:16.776569 16488 helpers.go:201] server response object: [{
"metadata": {},
"status": "Failure",
"message": "the server could not find the requested resource",
"reason": "NotFound",
"details": {
"causes": [
{
"reason": "UnexpectedServerResponse",
"message": "unknown"
}
]
},
"code": 404
}]
F1008 10:46:16.776591 16488 helpers.go:119] Error from server (NotFound): the server could not find the requested resource
The output of the minikube logs command:
==> Docker <==
-- Logs begin at Tue 2019-10-08 16:45:16 UTC, end at Tue 2019-10-08 17:53:27 UTC. --
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.306735384Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.306755508Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00014eb30, CONNECTING" module=grpc
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.307210762Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00014eb30, READY" module=grpc
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.322268197Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.322613202Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.322667866Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.322705424Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.322741186Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.322776380Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.322812510Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.323190934Z" level=info msg="Loading containers: start."
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.383650769Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.475845830Z" level=info msg="Loading containers: done."
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.485179528Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.485633035Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.493291254Z" level=info msg="Docker daemon" commit=039a7df9ba graphdriver(s)=overlay2 version=18.09.9
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.493454154Z" level=info msg="Daemon has completed initialization"
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.502837918Z" level=info msg="API listen on /var/run/docker.sock"
Oct 08 16:45:30 minikube systemd[1]: Started Docker Application Container Engine.
Oct 08 16:45:30 minikube dockerd[2394]: time="2019-10-08T16:45:30.503951266Z" level=info msg="API listen on [::]:2376"
Oct 08 16:46:00 minikube dockerd[2394]: time="2019-10-08T16:46:00.656357805Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 08 16:46:00 minikube dockerd[2394]: time="2019-10-08T16:46:00.657209739Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 08 16:46:00 minikube dockerd[2394]: time="2019-10-08T16:46:00.735808330Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 08 16:46:00 minikube dockerd[2394]: time="2019-10-08T16:46:00.736308386Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 08 16:46:00 minikube dockerd[2394]: time="2019-10-08T16:46:00.791137100Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 08 16:46:00 minikube dockerd[2394]: time="2019-10-08T16:46:00.791486976Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 08 16:46:01 minikube dockerd[2394]: time="2019-10-08T16:46:01.064001360Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 08 16:46:01 minikube dockerd[2394]: time="2019-10-08T16:46:01.064364125Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 08 16:46:03 minikube dockerd[2394]: time="2019-10-08T16:46:03.543451485Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 08 16:46:03 minikube dockerd[2394]: time="2019-10-08T16:46:03.544026213Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 08 16:46:03 minikube dockerd[2394]: time="2019-10-08T16:46:03.554018580Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 08 16:46:03 minikube dockerd[2394]: time="2019-10-08T16:46:03.554449098Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 08 16:46:03 minikube dockerd[2394]: time="2019-10-08T16:46:03.584541160Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 08 16:46:03 minikube dockerd[2394]: time="2019-10-08T16:46:03.585117055Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 08 16:46:03 minikube dockerd[2394]: time="2019-10-08T16:46:03.595521300Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 08 16:46:03 minikube dockerd[2394]: time="2019-10-08T16:46:03.596237488Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 08 16:46:08 minikube dockerd[2394]: time="2019-10-08T16:46:08.689352968Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
Oct 08 16:46:08 minikube dockerd[2394]: time="2019-10-08T16:46:08.690143407Z" level=warning msg="failed to retrieve docker-init version: exec: \"docker-init\": executable file not found in $PATH"
Oct 08 16:46:09 minikube dockerd[2394]: time="2019-10-08T16:46:09.762599933Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/026ff637c037f7fb3afce0a796fe71d0bda215bab4a10e7aa2832ac82b32a74d/shim.sock" debug=false pid=3446
Oct 08 16:46:09 minikube dockerd[2394]: time="2019-10-08T16:46:09.769440113Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/83070cc1cfc0c8e7331ce7f1eaef32bbffdefcdc0eae39d86498c8ff0c4d1cc4/shim.sock" debug=false pid=3450
Oct 08 16:46:09 minikube dockerd[2394]: time="2019-10-08T16:46:09.770204660Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4ff838d36ff997017e23c4a2a8e24c04843e45ae147c352e94b44905976aaf8c/shim.sock" debug=false pid=3452
Oct 08 16:46:09 minikube dockerd[2394]: time="2019-10-08T16:46:09.772024743Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ebdeca0e431efad65539f10904f129b10485e59efaa28f52cc9c3e1e119bd728/shim.sock" debug=false pid=3464
Oct 08 16:46:09 minikube dockerd[2394]: time="2019-10-08T16:46:09.776583034Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d13434de4d0171ddb710fab8f59a34b84ef30e326ddca49a5afb6bf1e9411a2e/shim.sock" debug=false pid=3479
Oct 08 16:46:10 minikube dockerd[2394]: time="2019-10-08T16:46:10.106742370Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dded816611e2d027c5b0ec20885f23173b9d901647ff9fe9f4c1dd967889c539/shim.sock" debug=false pid=3660
Oct 08 16:46:10 minikube dockerd[2394]: time="2019-10-08T16:46:10.247787079Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a403cf02cd19aab7fe318dc134fa713d31eb9e06484149836249372919e906bd/shim.sock" debug=false pid=3698
Oct 08 16:46:10 minikube dockerd[2394]: time="2019-10-08T16:46:10.266980310Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/929a2f64f9338ba58dcfe67353db1a90861d1a86d49eca740ff85f541eec44e2/shim.sock" debug=false pid=3707
Oct 08 16:46:10 minikube dockerd[2394]: time="2019-10-08T16:46:10.275428163Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/002a6025ad2c03b611453af7dade79a487aa16589a9fcd8eaad5aeb28c126abb/shim.sock" debug=false pid=3712
Oct 08 16:46:10 minikube dockerd[2394]: time="2019-10-08T16:46:10.321558196Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3500de1902f11279819053a6cab5c266c68700c0d520e05cd2e1f5d26fea8c39/shim.sock" debug=false pid=3752
Oct 08 16:46:25 minikube dockerd[2394]: time="2019-10-08T16:46:25.734969690Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/108551006dd347b4c28bd4b11b1e02c3aa226694f23229ae4475ac8f82f0e0f7/shim.sock" debug=false pid=4102
Oct 08 16:46:25 minikube dockerd[2394]: time="2019-10-08T16:46:25.931261350Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a10d73b19e2a78ee31a7638f949596fd86202319a6ac70862b73babdba9134d6/shim.sock" debug=false pid=4159
Oct 08 16:46:25 minikube dockerd[2394]: time="2019-10-08T16:46:25.977317114Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fc18178e1ae497df837544de13e64e831b7f649fc3d22dbee84bae0dfd797ef4/shim.sock" debug=false pid=4178
Oct 08 16:46:26 minikube dockerd[2394]: time="2019-10-08T16:46:26.327682996Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/25d35ae9113f76031abb753037e3b0f5cd689da9d3ed9251e3c208668164ac82/shim.sock" debug=false pid=4278
Oct 08 16:46:26 minikube dockerd[2394]: time="2019-10-08T16:46:26.554727604Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/032f4de87359790e64a979b644ae1c468da9efc98663a17d244e821716adffcf/shim.sock" debug=false pid=4331
Oct 08 16:46:26 minikube dockerd[2394]: time="2019-10-08T16:46:26.759560655Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c88357bfa0dd13bf0f9174aed84a474aa78b926a8eea2bfce75db74b4eb5deae/shim.sock" debug=false pid=4415
Oct 08 16:46:27 minikube dockerd[2394]: time="2019-10-08T16:46:27.007225894Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/96534e45cf389c24d30deadb87c1c5484e5b3c00faf12e47d019d9d3e06b59ca/shim.sock" debug=false pid=4484
Oct 08 16:46:27 minikube dockerd[2394]: time="2019-10-08T16:46:27.199650502Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5dd2e790200a6122f1c4e95d1df7343c9b4bd1ebbb883494c9d5d7a1391d2d18/shim.sock" debug=false pid=4554
Oct 08 16:46:31 minikube dockerd[2394]: time="2019-10-08T16:46:31.509730581Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9370520c0ba05659f067122968b2c80c0c011c2225d230c7ab7a28a57f618085/shim.sock" debug=false pid=4654
Oct 08 16:46:31 minikube dockerd[2394]: time="2019-10-08T16:46:31.521464882Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/78d2699651752d223d95d1d61664437517c5fe572dd8c8939f775022164abff2/shim.sock" debug=false pid=4666
Oct 08 16:46:32 minikube dockerd[2394]: time="2019-10-08T16:46:32.081381535Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/24ae86df755ca1e4fe10d2d350e3f524e50936b30526e1b1761572ee1ce4bbe2/shim.sock" debug=false pid=4784
Oct 08 16:46:39 minikube dockerd[2394]: time="2019-10-08T16:46:39.048223237Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dcb53ae6c39536b46922e76486eea8f83634835d2e8b47869c9bc73d0afda1b9/shim.sock" debug=false pid=4910
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
dcb53ae6c3953 kubernetesui/metrics-scraper@sha256:35fcae4fd9232a541a8cb08f2853117ba7231750b75c2cb3b6a58a2aaa57f878 About an hour ago Running dashboard-metrics-scraper 0 9370520c0ba05
24ae86df755ca 6802d83967b99 About an hour ago Running kubernetes-dashboard 0 78d2699651752
5dd2e790200a6 4689081edb103 About an hour ago Running storage-provisioner 0 96534e45cf389
c88357bfa0dd1 bf261d1579144 About an hour ago Running coredns 0 fc18178e1ae49
032f4de873597 bf261d1579144 About an hour ago Running coredns 0 a10d73b19e2a7
25d35ae9113f7 c21b0c7400f98 About an hour ago Running kube-proxy 0 108551006dd34
3500de1902f11 301ddc62b80b1 About an hour ago Running kube-scheduler 0 83070cc1cfc0c
929a2f64f9338 b2756210eeabf About an hour ago Running etcd 0 ebdeca0e431ef
002a6025ad2c0 06a629a7e51cd About an hour ago Running kube-controller-manager 0 d13434de4d017
a403cf02cd19a bd12a212f9dcb About an hour ago Running kube-addon-manager 0 4ff838d36ff99
dded816611e2d b305571ca60a5 About an hour ago Running kube-apiserver 0 026ff637c037f
==> coredns [032f4de87359] <==
2019-10-08T16:46:31.484Z [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
2019-10-08T16:46:31.759Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2019-10-08T16:46:31.759Z [INFO] CoreDNS-1.6.2
2019-10-08T16:46:31.759Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
2019-10-08T16:46:41.483Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-10-08T16:46:51.483Z [INFO] plugin/ready: Still waiting on: "kubernetes"
I1008 16:46:56.759401 1 trace.go:82] Trace[1211169901]: «Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch» (started: 2019-10-08 16:46:26.758887348 +0000 UTC m=+0.030390197) (total time: 30.000485896s):
Trace[1211169901]: [30.000485896s] [30.000485896s] END
E1008 16:46:56.759436 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.759436 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.759436 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1008 16:46:56.759591 1 trace.go:82] Trace[1000837879]: «Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch» (started: 2019-10-08 16:46:26.759100126 +0000 UTC m=+0.030602953) (total time: 30.000476828s):
Trace[1000837879]: [30.000476828s] [30.000476828s] END
I1008 16:46:56.759594 1 trace.go:82] Trace[430159868]: «Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch» (started: 2019-10-08 16:46:26.758832696 +0000 UTC m=+0.030335555) (total time: 30.000741159s):
Trace[430159868]: [30.000741159s] [30.000741159s] END
E1008 16:46:56.759601 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+E1008 16:46:56.759436 1 reflector.goincompatible/tools/cache/reflector.go:94: Fail:126] pkg/mod/k8s.io/ced to list *v1.Endpoints: Get https://10.96.0lient-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout https://10.96.0.1:443/api/v1/servi
E1008 16:46:56.759601 ces?limit=500&reso 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: FurceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ailed E1008 16:46:56.759601 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11to list *v1.Endpoints: Get https://1.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoi0.96.0.1:443/api/v1/endpoints?limit=nts: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.759601 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/to0: dial tcp 10.96.0.1:443: i/o timeoutols/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1
E1008 16:46:56.759604 1 reflector.go:126] pkg/mod/k8s.io/clie:443: i/o timeout
nE1008 16:46:56.759604 1 reflector.go:126] pkg/mod/k8s.io/client-go@t-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *vv11.0.0+incompatible/tools/cache/reflector.go:94: Failed1.Namespace: Get to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit= https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersi500&resourceVersion=0: dial tcp 10.96on=0: dial tcp 10.96.0.1:443: i/o timeout
.0.1:443: i/o timeout
E1008 16:46:56.759604 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.759604 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
==> coredns [c88357bfa0dd] <==
2019-10-08T16:46:29.865Z [INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
2019-10-08T16:46:31.926Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2019-10-08T16:46:31.926Z [INFO] CoreDNS-1.6.2
2019-10-08T16:46:31.926Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
2019-10-08T16:46:39.865Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-10-08T16:46:49.866Z [INFO] plugin/ready: Still waiting on: "kubernetes"
I1008 16:46:56.926838 1 trace.go:82] Trace[1439536403]: «Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch» (started: 2019-10-08 16:46:26.926238683 +0000 UTC m=+0.027240839) (total time: 30.000556899s):
Trace[1439536403]: [30.000556899s] [30.000556899s] END
E1008 16:46:56.927154 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.927396 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.927459 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.927154 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.927154 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.927154 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1008 16:46:56.927183 1 trace.go:82] Trace[304945456]: «Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch» (started: 2019-10-08 16:46:26.926558825 +0000 UTC m=+0.027560956) (total time: 30.000608558s):
Trace[304945456]: [30.000608558s] [30.000608558s] END
E1008 16:46:56.927396 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.927396 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.927396 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I1008 16:46:56.927096 1 trace.go:82] Trace[2094142773]: «Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch» (started: 2019-10-08 16:46:26.926804713 +0000 UTC m=+0.027806847) (total time: 30.000240908s):
Trace[2094142773]: [30.000240908s] [30.000240908s] END
E1008 16:46:56.927459 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.927459 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E1008 16:46:56.927459 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
==> dmesg <==
[ +5.000952] hpet1: lost 318 rtc interrupts
[ +5.002053] hpet1: lost 318 rtc interrupts
[ +5.000548] hpet1: lost 318 rtc interrupts
[ +5.001107] hpet1: lost 318 rtc interrupts
[ +5.002405] hpet1: lost 318 rtc interrupts
[ +5.001274] hpet1: lost 318 rtc interrupts
[Oct 8 17:49] hpet1: lost 318 rtc interrupts
[ +5.001492] hpet1: lost 319 rtc interrupts
[ +5.002974] hpet1: lost 318 rtc interrupts
[ +4.999637] hpet1: lost 319 rtc interrupts
[ +5.001276] hpet1: lost 318 rtc interrupts
[ +5.001064] hpet1: lost 318 rtc interrupts
[ +5.001168] hpet1: lost 318 rtc interrupts
[ +5.001931] hpet1: lost 319 rtc interrupts
[ +5.000254] hpet1: lost 318 rtc interrupts
[ +5.000447] hpet1: lost 318 rtc interrupts
[ +5.001475] hpet1: lost 318 rtc interrupts
[ +5.001233] hpet1: lost 318 rtc interrupts
[Oct 8 17:50] hpet1: lost 318 rtc interrupts
[ +5.000661] hpet1: lost 318 rtc interrupts
[ +5.001132] hpet1: lost 319 rtc interrupts
[ +5.000629] hpet1: lost 318 rtc interrupts
[ +5.001521] hpet1: lost 318 rtc interrupts
[ +5.001046] hpet1: lost 318 rtc interrupts
[ +5.000950] hpet1: lost 318 rtc interrupts
[ +5.002324] hpet1: lost 318 rtc interrupts
[ +5.001294] hpet1: lost 318 rtc interrupts
[ +5.000420] hpet1: lost 318 rtc interrupts
[ +5.001284] hpet1: lost 318 rtc interrupts
[ +5.001286] hpet1: lost 318 rtc interrupts
[Oct 8 17:51] hpet1: lost 318 rtc interrupts
[ +5.001150] hpet1: lost 318 rtc interrupts
[ +5.000903] hpet1: lost 318 rtc interrupts
[ +5.001932] hpet1: lost 319 rtc interrupts
[ +5.000321] hpet1: lost 319 rtc interrupts
[ +5.000863] hpet1: lost 318 rtc interrupts
[ +5.000917] hpet1: lost 318 rtc interrupts
[ +5.001339] hpet1: lost 318 rtc interrupts
[ +5.000928] hpet1: lost 318 rtc interrupts
[ +5.001573] hpet1: lost 318 rtc interrupts
[ +5.002289] hpet1: lost 318 rtc interrupts
[ +5.000570] hpet1: lost 318 rtc interrupts
[Oct 8 17:52] hpet1: lost 318 rtc interrupts
[ +5.001168] hpet1: lost 318 rtc interrupts
[ +5.001347] hpet1: lost 318 rtc interrupts
[ +5.000893] hpet1: lost 318 rtc interrupts
[ +5.002044] hpet1: lost 319 rtc interrupts
[ +5.000379] hpet1: lost 318 rtc interrupts
[ +5.001491] hpet1: lost 318 rtc interrupts
[ +5.002148] hpet1: lost 318 rtc interrupts
[ +4.999433] hpet1: lost 318 rtc interrupts
[ +5.000661] hpet1: lost 318 rtc interrupts
[ +5.001440] hpet1: lost 318 rtc interrupts
[ +5.000280] hpet1: lost 318 rtc interrupts
[Oct 8 17:53] hpet1: lost 318 rtc interrupts
[ +4.998944] hpet1: lost 318 rtc interrupts
[ +5.001086] hpet1: lost 318 rtc interrupts
[ +5.000910] hpet1: lost 318 rtc interrupts
[ +5.000927] hpet1: lost 318 rtc interrupts
[ +4.547797] hrtimer: interrupt took 6600009 ns
==> kernel <==
17:53:27 up 1:08, 0 users, load average: 0.97, 0.41, 0.23
Linux minikube 4.15.0 #1 SMP Wed Sep 18 07:44:58 PDT 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2018.05.3"
==> kube-addon-manager [a403cf02cd19] <==
INFO: error: no o== Reconciling with addon-bjects passed to apply
error: no objects pamanager label ssed to apply
error: ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
nclusterrolebinding.o objects passrbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployed to apply
ment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-08T17:53:18+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-08T17:53:18+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-08T17:53:23+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2019-10-08T17:53:23+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-10-08T17:53:27+00:00 ==
==> kube-apiserver [dded816611e2] <==
I1008 16:46:12.844273 1 client.go:361] parsed scheme: "endpoint"
I1008 16:46:12.844308 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1008 16:46:12.852952 1 client.go:361] parsed scheme: "endpoint"
I1008 16:46:12.853127 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W1008 16:46:12.980080 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
W1008 16:46:13.006071 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1008 16:46:13.047576 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1008 16:46:13.054047 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1008 16:46:13.069293 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1008 16:46:13.096159 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1008 16:46:13.096249 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1008 16:46:13.109230 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1008 16:46:13.109303 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1008 16:46:13.110997 1 client.go:361] parsed scheme: "endpoint"
I1008 16:46:13.111111 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1008 16:46:13.146458 1 client.go:361] parsed scheme: "endpoint"
I1008 16:46:13.146546 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1008 16:46:14.831219 1 secure_serving.go:123] Serving securely on [::]:8443
I1008 16:46:14.831334 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1008 16:46:14.831423 1 autoregister_controller.go:140] Starting autoregister controller
I1008 16:46:14.831443 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1008 16:46:14.831660 1 crd_finalizer.go:274] Starting CRDFinalizer
I1008 16:46:14.832213 1 controller.go:85] Starting OpenAPI controller
I1008 16:46:14.832337 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I1008 16:46:14.832381 1 naming_controller.go:288] Starting NamingConditionController
I1008 16:46:14.832463 1 establishing_controller.go:73] Starting EstablishingController
I1008 16:46:14.832554 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1008 16:46:14.832593 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1008 16:46:14.837122 1 available_controller.go:383] Starting AvailableConditionController
I1008 16:46:14.837165 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1008 16:46:14.837179 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1008 16:46:14.837183 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I1008 16:46:14.831425 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
E1008 16:46:14.842212 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.100, ResourceVersion: 0, AdditionalErrorMsg:
I1008 16:46:14.831432 1 controller.go:81] Starting OpenAPI AggregationController
I1008 16:46:14.932331 1 cache.go:39] Caches are synced for autoregister controller
I1008 16:46:14.939441 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1008 16:46:14.939481 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1008 16:46:14.939492 1 shared_informer.go:204] Caches are synced for crd-autoregister
I1008 16:46:15.831457 1 controller.go:107] OpenAPI AggregationController: Processing item
I1008 16:46:15.831845 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1008 16:46:15.832011 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1008 16:46:15.839314 1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1008 16:46:15.845510 1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1008 16:46:15.845584 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1008 16:46:17.263310 1 controller.go:606] quota admission added evaluator for: endpoints
I1008 16:46:17.613951 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1008 16:46:17.894210 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1008 16:46:18.178622 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
W1008 16:46:18.229410 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.99.100]
I1008 16:46:18.487386 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1008 16:46:19.296129 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1008 16:46:19.619098 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1008 16:46:25.285463 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1008 16:46:25.316035 1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
I1008 16:46:25.339331 1 controller.go:606] quota admission added evaluator for: replicasets.apps
E1008 16:58:40.026005 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E1008 17:14:56.086248 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E1008 17:22:09.130166 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E1008 17:37:27.163034 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
==> kube-controller-manager [002a6025ad2c] <==
I1008 16:46:23.781900 1 controllermanager.go:534] Started «attachdetach»
I1008 16:46:23.782090 1 attach_detach_controller.go:334] Starting attach detach controller
I1008 16:46:23.782214 1 shared_informer.go:197] Waiting for caches to sync for attach detach
I1008 16:46:24.032652 1 controllermanager.go:534] Started «job»
I1008 16:46:24.032915 1 job_controller.go:143] Starting job controller
I1008 16:46:24.033005 1 shared_informer.go:197] Waiting for caches to sync for job
I1008 16:46:24.431485 1 controllermanager.go:534] Started «disruption»
I1008 16:46:24.431623 1 disruption.go:333] Starting disruption controller
I1008 16:46:24.431657 1 shared_informer.go:197] Waiting for caches to sync for disruption
I1008 16:46:24.681817 1 controllermanager.go:534] Started «bootstrapsigner»
W1008 16:46:24.681846 1 controllermanager.go:526] Skipping "nodeipam"
I1008 16:46:24.681884 1 shared_informer.go:197] Waiting for caches to sync for bootstrap_signer
E1008 16:46:24.932717 1 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1008 16:46:24.933056 1 controllermanager.go:526] Skipping "service"
I1008 16:46:24.933696 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1008 16:46:24.944589 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I1008 16:46:24.971842 1 shared_informer.go:204] Caches are synced for service account
I1008 16:46:24.982453 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I1008 16:46:24.983106 1 shared_informer.go:204] Caches are synced for endpoint
I1008 16:46:24.984623 1 shared_informer.go:204] Caches are synced for PV protection
I1008 16:46:24.985544 1 shared_informer.go:204] Caches are synced for expand
I1008 16:46:25.029006 1 shared_informer.go:204] Caches are synced for stateful set
I1008 16:46:25.032887 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I1008 16:46:25.036330 1 shared_informer.go:204] Caches are synced for PVC protection
I1008 16:46:25.041874 1 shared_informer.go:204] Caches are synced for namespace
I1008 16:46:25.049580 1 shared_informer.go:204] Caches are synced for GC
E1008 16:46:25.065828 1 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I1008 16:46:25.188285 1 shared_informer.go:204] Caches are synced for certificate
W1008 16:46:25.189000 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1008 16:46:25.231937 1 shared_informer.go:204] Caches are synced for persistent volume
I1008 16:46:25.232291 1 shared_informer.go:204] Caches are synced for certificate
I1008 16:46:25.235263 1 shared_informer.go:204] Caches are synced for TTL
I1008 16:46:25.281906 1 shared_informer.go:204] Caches are synced for daemon sets
I1008 16:46:25.284217 1 shared_informer.go:204] Caches are synced for taint
I1008 16:46:25.284544 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone:
W1008 16:46:25.284698 1 node_lifecycle_controller.go:903] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1008 16:46:25.284757 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal.
I1008 16:46:25.285134 1 taint_manager.go:186] Starting NoExecuteTaintManager
I1008 16:46:25.285512 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"a3142762-c8ec-44e0-bbe5-4b466bea474b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I1008 16:46:25.285628 1 shared_informer.go:204] Caches are synced for attach detach
I1008 16:46:25.308246 1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"c23a3114-9870-4acd-a4c2-66840e91195a", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-hpn5d
I1008 16:46:25.334859 1 shared_informer.go:204] Caches are synced for deployment
I1008 16:46:25.352209 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"71e837fa-3f4b-4716-81f9-fb82e28a49d2", APIVersion:"apps/v1", ResourceVersion:"196", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
I1008 16:46:25.383513 1 shared_informer.go:204] Caches are synced for ReplicaSet
I1008 16:46:25.390839 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"46a0dd0e-2fa7-4585-bd0b-1b6af3db1b82", APIVersion:"apps/v1", ResourceVersion:"331", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-dthxw
I1008 16:46:25.410499 1 shared_informer.go:204] Caches are synced for HPA
I1008 16:46:25.420012 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"46a0dd0e-2fa7-4585-bd0b-1b6af3db1b82", APIVersion:"apps/v1", ResourceVersion:"331", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-kk68k
I1008 16:46:25.435073 1 shared_informer.go:204] Caches are synced for disruption
I1008 16:46:25.435106 1 disruption.go:341] Sending events to api server.
I1008 16:46:25.436212 1 shared_informer.go:204] Caches are synced for ReplicationController
I1008 16:46:25.533139 1 shared_informer.go:204] Caches are synced for job
I1008 16:46:25.534058 1 shared_informer.go:204] Caches are synced for garbage collector
I1008 16:46:25.538891 1 shared_informer.go:204] Caches are synced for resource quota
I1008 16:46:25.541777 1 shared_informer.go:204] Caches are synced for garbage collector
I1008 16:46:25.541799 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1008 16:46:25.545348 1 shared_informer.go:204] Caches are synced for resource quota
I1008 16:46:31.071452 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"85a930a7-0874-4550-bd52-d1696ad7612d", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-76585494d8 to 1
I1008 16:46:31.079792 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-76585494d8", UID:"157a9320-cf50-45b5-b7c0-28e661307e55", APIVersion:"apps/v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-76585494d8-hg6c5
I1008 16:46:31.119948 1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"3fd8ddcd-bc60-4ec8-8717-da84d29f4da4", APIVersion:"apps/v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-57f4cb4545 to 1
I1008 16:46:31.124700 1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-57f4cb4545", UID:"a3451600-d3d1-4103-bd7b-4dfba3f11a58", APIVersion:"apps/v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-57f4cb4545-2qdw2
==> kube-proxy [25d35ae9113f] <==
W1008 16:46:26.891315 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
I1008 16:46:26.906484 1 node.go:135] Successfully retrieved node IP: 10.0.2.15
I1008 16:46:26.906512 1 server_others.go:149] Using iptables Proxier.
W1008 16:46:26.906856 1 proxier.go:287] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1008 16:46:26.907085 1 server.go:529] Version: v1.16.0
I1008 16:46:26.907715 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1008 16:46:26.907738 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1008 16:46:26.908091 1 conntrack.go:83] Setting conntrack hashsize to 32768
I1008 16:46:26.919507 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1008 16:46:26.919558 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1008 16:46:26.920535 1 config.go:131] Starting endpoints config controller
I1008 16:46:26.920580 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1008 16:46:26.921049 1 config.go:313] Starting service config controller
I1008 16:46:26.921071 1 shared_informer.go:197] Waiting for caches to sync for service config
I1008 16:46:27.022670 1 shared_informer.go:204] Caches are synced for service config
I1008 16:46:27.022717 1 shared_informer.go:204] Caches are synced for endpoints config
==> kube-scheduler [3500de1902f1] <==
I1008 16:46:11.413457 1 serving.go:319] Generated self-signed cert in-memory
W1008 16:46:14.899832 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1008 16:46:14.899960 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1008 16:46:14.900066 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
W1008 16:46:14.900129 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1008 16:46:14.908146 1 server.go:143] Version: v1.16.0
I1008 16:46:14.908394 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W1008 16:46:14.913083 1 authorization.go:47] Authorization is disabled
W1008 16:46:14.914062 1 authentication.go:79] Authentication is disabled
I1008 16:46:14.914189 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I1008 16:46:14.914778 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E1008 16:46:14.959639 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1008 16:46:14.960044 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1008 16:46:14.969236 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1008 16:46:14.969532 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1008 16:46:14.969754 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1008 16:46:14.969936 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1008 16:46:14.976479 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1008 16:46:14.976532 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1008 16:46:14.977285 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1008 16:46:14.977472 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1008 16:46:14.981661 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1008 16:46:15.960991 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1008 16:46:15.962110 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1008 16:46:15.970504 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1008 16:46:15.977826 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1008 16:46:15.978511 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1008 16:46:15.980201 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1008 16:46:15.982680 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1008 16:46:15.983205 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1008 16:46:15.984349 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1008 16:46:15.986138 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1008 16:46:15.987960 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I1008 16:46:17.922909 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler...
I1008 16:46:17.930706 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
==> kubelet <==
-- Logs begin at Tue 2019-10-08 16:45:16 UTC, end at Tue 2019-10-08 17:53:27 UTC. --
Oct 08 16:46:13 minikube kubelet[3353]: E1008 16:46:13.608807 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:13 minikube kubelet[3353]: E1008 16:46:13.709476 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:13 minikube kubelet[3353]: E1008 16:46:13.810630 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:13 minikube kubelet[3353]: E1008 16:46:13.910882 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:14 minikube kubelet[3353]: E1008 16:46:14.012825 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:14 minikube kubelet[3353]: E1008 16:46:14.113094 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:14 minikube kubelet[3353]: E1008 16:46:14.213934 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:14 minikube kubelet[3353]: E1008 16:46:14.315499 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:14 minikube kubelet[3353]: E1008 16:46:14.417260 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:14 minikube kubelet[3353]: E1008 16:46:14.519380 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:14 minikube kubelet[3353]: E1008 16:46:14.619978 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:14 minikube kubelet[3353]: E1008 16:46:14.720661 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:14 minikube kubelet[3353]: E1008 16:46:14.820873 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:14 minikube kubelet[3353]: E1008 16:46:14.915579 3353 controller.go:220] failed to get node "minikube" when trying to set owner ref to the node lease: nodes "minikube" not found
Oct 08 16:46:14 minikube kubelet[3353]: E1008 16:46:14.921199 3353 kubelet.go:2267] node "minikube" not found
Oct 08 16:46:14 minikube kubelet[3353]: E1008 16:46:14.973877 3353 controller.go:135] failed to ensure node lease exists, will retry in 3.2s, error: namespaces "kube-node-lease" not found
Oct 08 16:46:15 minikube kubelet[3353]: I1008 16:46:15.021674 3353 reconciler.go:154] Reconciler: start to sync state
Oct 08 16:46:15 minikube kubelet[3353]: I1008 16:46:15.023112 3353 kubelet_node_status.go:75] Successfully registered node minikube
Oct 08 16:46:16 minikube kubelet[3353]: E1008 16:46:16.361286 3353 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cbb9c92696c39c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342715a39c, ext:5182790951, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342715a39c, ext:5182790951, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 08 16:46:16 minikube kubelet[3353]: E1008 16:46:16.416945 3353 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cbb9c92aefe3dd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6ec3dd, ext:5255740768, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6ec3dd, ext:5255740768, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 08 16:46:16 minikube kubelet[3353]: E1008 16:46:16.472977 3353 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cbb9c92aeff50f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6ed50f, ext:5255745170, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6ed50f, ext:5255745170, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 08 16:46:16 minikube kubelet[3353]: E1008 16:46:16.527414 3353 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cbb9c92aeffece", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6edece, ext:5255747666, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6edece, ext:5255747666, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 08 16:46:16 minikube kubelet[3353]: E1008 16:46:16.582116 3353 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cbb9c92bc43743", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342c431743, ext:5269655749, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342c431743, ext:5269655749, loc:(*time.Location)(0x797f100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 08 16:46:16 minikube kubelet[3353]: E1008 16:46:16.638978 3353 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cbb9c92aeffece", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6edece, ext:5255747666, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342da6c388, ext:5292965133, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 08 16:46:16 minikube kubelet[3353]: E1008 16:46:16.697330 3353 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cbb9c92aefe3dd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6ec3dd, ext:5255740768, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342da68a95, ext:5292950555, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 08 16:46:16 minikube kubelet[3353]: E1008 16:46:16.756375 3353 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cbb9c92aeff50f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6ed50f, ext:5255745170, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342da6b47c, ext:5292961281, loc:(*time.Location)(0x797f100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 08 16:46:16 minikube kubelet[3353]: E1008 16:46:16.961439 3353 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cbb9c92aefe3dd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6ec3dd, ext:5255740768, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342dc0d68e, ext:5294673938, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 08 16:46:17 minikube kubelet[3353]: E1008 16:46:17.363664 3353 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cbb9c92aeff50f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node minikube status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6ed50f, ext:5255745170, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342dc0e70f, ext:5294678163, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 08 16:46:17 minikube kubelet[3353]: E1008 16:46:17.761205 3353 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cbb9c92aeffece", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node minikube status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6edece, ext:5255747666, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342dc0f01a, ext:5294680480, loc:(*time.Location)(0x797f100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 08 16:46:18 minikube kubelet[3353]: E1008 16:46:18.170912 3353 event.go:237] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.15cbb9c92aefe3dd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node minikube status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342b6ec3dd, ext:5255740768, loc:(*time.Location)(0x797f100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf5f4d342de30d56, ext:5296916186, loc:(*time.Location)(0x797f100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Oct 08 16:46:25 minikube kubelet[3353]: I1008 16:46:25.471724 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/6d803906-8456-4b90-a9b1-7e16ccbcceea-kube-proxy") pod "kube-proxy-hpn5d" (UID: "6d803906-8456-4b90-a9b1-7e16ccbcceea")
Oct 08 16:46:25 minikube kubelet[3353]: I1008 16:46:25.472345 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-mb9t4" (UniqueName: "kubernetes.io/secret/6d803906-8456-4b90-a9b1-7e16ccbcceea-kube-proxy-token-mb9t4") pod "kube-proxy-hpn5d" (UID: "6d803906-8456-4b90-a9b1-7e16ccbcceea")
Oct 08 16:46:25 minikube kubelet[3353]: I1008 16:46:25.472414 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/6d803906-8456-4b90-a9b1-7e16ccbcceea-xtables-lock") pod "kube-proxy-hpn5d" (UID: "6d803906-8456-4b90-a9b1-7e16ccbcceea")
Oct 08 16:46:25 minikube kubelet[3353]: I1008 16:46:25.472467 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/6d803906-8456-4b90-a9b1-7e16ccbcceea-lib-modules") pod "kube-proxy-hpn5d" (UID: "6d803906-8456-4b90-a9b1-7e16ccbcceea")
Oct 08 16:46:25 minikube kubelet[3353]: I1008 16:46:25.572870 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-6pr4k" (UniqueName: "kubernetes.io/secret/8b18d974-201b-4d12-a870-ffba8ffc26c9-coredns-token-6pr4k") pod "coredns-5644d7b6d9-kk68k" (UID: "8b18d974-201b-4d12-a870-ffba8ffc26c9")
Oct 08 16:46:25 minikube kubelet[3353]: I1008 16:46:25.573003 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d4635a74-c593-442e-a68a-2699326c24cf-config-volume") pod "coredns-5644d7b6d9-dthxw" (UID: "d4635a74-c593-442e-a68a-2699326c24cf")
Oct 08 16:46:25 minikube kubelet[3353]: I1008 16:46:25.573058 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-6pr4k" (UniqueName: "kubernetes.io/secret/d4635a74-c593-442e-a68a-2699326c24cf-coredns-token-6pr4k") pod "coredns-5644d7b6d9-dthxw" (UID: "d4635a74-c593-442e-a68a-2699326c24cf")
Oct 08 16:46:25 minikube kubelet[3353]: I1008 16:46:25.573122 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8b18d974-201b-4d12-a870-ffba8ffc26c9-config-volume") pod "coredns-5644d7b6d9-kk68k" (UID: "8b18d974-201b-4d12-a870-ffba8ffc26c9")
Oct 08 16:46:26 minikube kubelet[3353]: W1008 16:46:26.429998 3353 pod_container_deletor.go:75] Container "a10d73b19e2a78ee31a7638f949596fd86202319a6ac70862b73babdba9134d6" not found in pod's containers
Oct 08 16:46:26 minikube kubelet[3353]: W1008 16:46:26.436025 3353 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-kk68k through plugin: invalid network status for
Oct 08 16:46:26 minikube kubelet[3353]: W1008 16:46:26.681306 3353 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-dthxw through plugin: invalid network status for
Oct 08 16:46:26 minikube kubelet[3353]: I1008 16:46:26.685566 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-kq52n" (UniqueName: "kubernetes.io/secret/4d123bb0-444e-4e1c-8ec3-201f9af2377c-storage-provisioner-token-kq52n") pod "storage-provisioner" (UID: "4d123bb0-444e-4e1c-8ec3-201f9af2377c")
Oct 08 16:46:26 minikube kubelet[3353]: I1008 16:46:26.685766 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/4d123bb0-444e-4e1c-8ec3-201f9af2377c-tmp") pod "storage-provisioner" (UID: "4d123bb0-444e-4e1c-8ec3-201f9af2377c")
Oct 08 16:46:26 minikube kubelet[3353]: W1008 16:46:26.693324 3353 pod_container_deletor.go:75] Container "fc18178e1ae497df837544de13e64e831b7f649fc3d22dbee84bae0dfd797ef4" not found in pod's containers
Oct 08 16:46:26 minikube kubelet[3353]: W1008 16:46:26.716722 3353 pod_container_deletor.go:75] Container "108551006dd347b4c28bd4b11b1e02c3aa226694f23229ae4475ac8f82f0e0f7" not found in pod's containers
Oct 08 16:46:27 minikube kubelet[3353]: W1008 16:46:27.753196 3353 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-kk68k through plugin: invalid network status for
Oct 08 16:46:27 minikube kubelet[3353]: W1008 16:46:27.768576 3353 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-dthxw through plugin: invalid network status for
Oct 08 16:46:31 minikube kubelet[3353]: I1008 16:46:31.214509 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-4fshk" (UniqueName: "kubernetes.io/secret/fb493af4-1c47-4eff-b5ee-29fc7ddddd73-kubernetes-dashboard-token-4fshk") pod "kubernetes-dashboard-57f4cb4545-2qdw2" (UID: "fb493af4-1c47-4eff-b5ee-29fc7ddddd73")
Oct 08 16:46:31 minikube kubelet[3353]: I1008 16:46:31.214589 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-4fshk" (UniqueName: "kubernetes.io/secret/210448da-9943-45a0-85ca-8e4da6efcae5-kubernetes-dashboard-token-4fshk") pod "dashboard-metrics-scraper-76585494d8-hg6c5" (UID: "210448da-9943-45a0-85ca-8e4da6efcae5")
Oct 08 16:46:31 minikube kubelet[3353]: I1008 16:46:31.214614 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/fb493af4-1c47-4eff-b5ee-29fc7ddddd73-tmp-volume") pod "kubernetes-dashboard-57f4cb4545-2qdw2" (UID: "fb493af4-1c47-4eff-b5ee-29fc7ddddd73")
Oct 08 16:46:31 minikube kubelet[3353]: I1008 16:46:31.214631 3353 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/210448da-9943-45a0-85ca-8e4da6efcae5-tmp-volume") pod "dashboard-metrics-scraper-76585494d8-hg6c5" (UID: "210448da-9943-45a0-85ca-8e4da6efcae5")
Oct 08 16:46:31 minikube kubelet[3353]: W1008 16:46:31.901630 3353 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-hg6c5 through plugin: invalid network status for
Oct 08 16:46:31 minikube kubelet[3353]: W1008 16:46:31.985630 3353 pod_container_deletor.go:75] Container "78d2699651752d223d95d1d61664437517c5fe572dd8c8939f775022164abff2" not found in pod's containers
Oct 08 16:46:31 minikube kubelet[3353]: W1008 16:46:31.986680 3353 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-57f4cb4545-2qdw2 through plugin: invalid network status for
Oct 08 16:46:32 minikube kubelet[3353]: W1008 16:46:32.004086 3353 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-hg6c5 through plugin: invalid network status for
Oct 08 16:46:32 minikube kubelet[3353]: W1008 16:46:32.006455 3353 pod_container_deletor.go:75] Container "9370520c0ba05659f067122968b2c80c0c011c2225d230c7ab7a28a57f618085" not found in pod's containers
Oct 08 16:46:33 minikube kubelet[3353]: W1008 16:46:33.016748 3353 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-57f4cb4545-2qdw2 through plugin: invalid network status for
Oct 08 16:46:33 minikube kubelet[3353]: W1008 16:46:33.022606 3353 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-hg6c5 through plugin: invalid network status for
Oct 08 16:46:39 minikube kubelet[3353]: W1008 16:46:39.079905 3353 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-hg6c5 through plugin: invalid network status for
Oct 08 16:46:40 minikube kubelet[3353]: W1008 16:46:40.231463 3353 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-hg6c5 through plugin: invalid network status for
==> kubernetes-dashboard [24ae86df755c] <==
2019/10/08 17:51:49 [2019-10-08T17:51:49Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:51:49 Getting list of namespaces
2019/10/08 17:51:49 [2019-10-08T17:51:49Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:51:54 [2019-10-08T17:51:54Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:51:54 Getting list of namespaces
2019/10/08 17:51:54 [2019-10-08T17:51:54Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:51:59 [2019-10-08T17:51:59Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:51:59 Getting list of namespaces
2019/10/08 17:51:59 [2019-10-08T17:51:59Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:52:04 [2019-10-08T17:52:04Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:52:04 Getting list of namespaces
2019/10/08 17:52:04 [2019-10-08T17:52:04Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:52:09 [2019-10-08T17:52:09Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:52:09 Getting list of namespaces
2019/10/08 17:52:09 [2019-10-08T17:52:09Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:52:14 [2019-10-08T17:52:14Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:52:14 Getting list of namespaces
2019/10/08 17:52:14 [2019-10-08T17:52:14Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:52:19 [2019-10-08T17:52:19Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:52:19 Getting list of namespaces
2019/10/08 17:52:19 [2019-10-08T17:52:19Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:52:24 [2019-10-08T17:52:24Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:52:24 Getting list of namespaces
2019/10/08 17:52:24 [2019-10-08T17:52:24Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:52:29 [2019-10-08T17:52:29Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:52:29 Getting list of namespaces
2019/10/08 17:52:29 [2019-10-08T17:52:29Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:52:34 [2019-10-08T17:52:34Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:52:34 Getting list of namespaces
2019/10/08 17:52:34 [2019-10-08T17:52:34Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:52:39 [2019-10-08T17:52:39Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:52:39 Getting list of namespaces
2019/10/08 17:52:39 [2019-10-08T17:52:39Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:52:44 [2019-10-08T17:52:44Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:52:44 Getting list of namespaces
2019/10/08 17:52:44 [2019-10-08T17:52:44Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:52:49 [2019-10-08T17:52:49Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:52:49 Getting list of namespaces
2019/10/08 17:52:49 [2019-10-08T17:52:49Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:52:54 [2019-10-08T17:52:54Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:52:54 Getting list of namespaces
2019/10/08 17:52:54 [2019-10-08T17:52:54Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:52:59 [2019-10-08T17:52:59Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:52:59 Getting list of namespaces
2019/10/08 17:52:59 [2019-10-08T17:52:59Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:53:04 [2019-10-08T17:53:04Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:53:04 Getting list of namespaces
2019/10/08 17:53:04 [2019-10-08T17:53:04Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:53:09 [2019-10-08T17:53:09Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:53:09 Getting list of namespaces
2019/10/08 17:53:09 [2019-10-08T17:53:09Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:53:14 [2019-10-08T17:53:14Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:53:14 Getting list of namespaces
2019/10/08 17:53:14 [2019-10-08T17:53:14Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:53:19 [2019-10-08T17:53:19Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:53:19 Getting list of namespaces
2019/10/08 17:53:19 [2019-10-08T17:53:19Z] Outcoming response to 172.17.0.1:45834 with 200 status code
2019/10/08 17:53:24 [2019-10-08T17:53:24Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:45834:
2019/10/08 17:53:24 Getting list of namespaces
2019/10/08 17:53:24 [2019-10-08T17:53:24Z] Outcoming response to 172.17.0.1:45834 with 200 status code
==> storage-provisioner [5dd2e790200a] <==
The operating system version:
MacBook Pro (15-inch, 2018)
macOS Mojave
10.14.6 (18G103)
I’m running kubectl create -f blah.yml
The response I get is:
Error from server (NotFound): the server could not find the requested resource
Is there any way to determine which resource was requested that was not found?
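One way to pin down the missing resource, assuming your kubectl supports the standard -v verbosity flag (it has for a long time), is to re-run the failing command with request logging turned on and compare the manifest against what this particular apiserver actually serves; in the verbose output, the request that comes back 404 names the exact API path (group/version/kind) the server rejected. A sketch:
kubectl create -f blah.yml -v=8    # prints each API request and the 404 (NotFound) response body
kubectl api-versions               # lists every group/version this apiserver exposes
kubectl explain deployment        # errors out if the kind is unknown to this server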
kubectl get ns returns
NAME STATUS AGE
default Active 243d
kube-public Active 243d
kube-system Active 243d
This is not a cron job.
Client version 1.9
Server version 1.6
asked Sep 19, 2018 in Kubernetes
The official documentation says:
A client should be skewed no more than one minor version from the master, but may lead the master by up to one minor version. For example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes, and should work with v1.2, v1.3, and v1.4 clients.
Here the client (1.9) is three minor versions ahead of the server (1.6), well outside that window. You can either downgrade the client or upgrade the server!
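A minimal sketch of the downgrade route for the versions above. The exact patch release (v1.6.13 is used as an example) and the historical release download URL are assumptions here; substitute whatever matches your server:
kubectl version --short    # confirm the client/server skew
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.6.13/bin/linux/amd64/kubectl    # v1.6.13 is an example patch release; adjust to your server
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl    # replace the 1.9 client with one matching the 1.6 server
kubectl version --short    # client and server should now be within one minor version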
answered Sep 19, 2018 by Nilesh