Error from server (NotFound): pods not found

Hi, it seems there is a minor bug with JupyterHub using Kubernetes and Docker. I tested this on AWS, following the instructions in Zero to JupyterHub. Below are the symptoms: user login to JupyterHub p...

Hi Yuvi,

Below are the details:

$ kubectl --namespace= logs
error: expected 'logs (POD | TYPE/NAME) [CONTAINER_NAME]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples.

$ kubectl logs --namespace='kube-system'
error: expected 'logs (POD | TYPE/NAME) [CONTAINER_NAME]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples.

$ kubectl logs --namespace="kube-system"
error: expected 'logs (POD | TYPE/NAME) [CONTAINER_NAME]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples.

$ kubectl logs --namespace=kube-system
error: expected 'logs (POD | TYPE/NAME) [CONTAINER_NAME]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples.

So I ran the command below:

$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                                                     READY     STATUS    RESTARTS   AGE
kube-system   calico-etcd-nc6k5                                                        1/1       Running   0          12d
kube-system   calico-node-9s0f3                                                        2/2       Running   0          12d
kube-system   calico-node-hhv6w                                                        2/2       Running   1          12d
kube-system   calico-node-m5slg                                                        2/2       Running   1          12d
kube-system   calico-policy-controller-1727037546-nhr41                                1/1       Running   0          12d
kube-system   etcd-ip-192-168-11-169.ap-southeast-2.compute.internal                      1/1       Running   0          12d
kube-system   hub-deployment-606657460-1l4wf                                           1/1       Running   0          4d
kube-system   jupyter-admin                                                            1/1       Running   0          4d
kube-system   jupyter-user1                                                            1/1       Running   0          4d
kube-system   kube-apiserver-ip-192-168-11-169.ap-southeast-2.compute.internal            1/1       Running   0          12d
kube-system   kube-controller-manager-ip-192-168-11-169.ap-southeast-2.compute.internal   1/1       Running   0          12d
kube-system   kube-dns-2425271678-0xr64                                                3/3       Running   0          12d
kube-system   kube-proxy-1rshl                                                         1/1       Running   0          12d
kube-system   kube-proxy-3ksll                                                         1/1       Running   0          12d
kube-system   kube-proxy-d4bh4                                                         1/1       Running   0          12d
kube-system   kube-scheduler-ip-192-168-11-169.ap-southeast-2.compute.internal            1/1       Running   0          12d
kube-system   kubernetes-dashboard-3313488171-jjtjx                                    1/1       Running   0          12d
kube-system   proxy-deployment-1227971824-k8vbd                                        1/1       Running   0          4d
kube-system   tiller-deploy-1853538654-kc6wl                                           1/1       Running   0          12d
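
For reference, the logs subcommand requires a pod name as its argument; taking the hub pod from the listing above, the command would look something like this:

$ kubectl logs --namespace=kube-system hub-deployment-606657460-1l4wf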

My config.yaml looks like this:

hub:
  # output of first execution of 'openssl rand -hex 32'
  cookieSecret: "<cookie secret hex number generated>"

proxy:
  # output of second execution of 'openssl rand -hex 32'
  secretToken: "<secure token hex number generated>"

singleuser:
  image:
    name: jupyter/all-spark-notebook
    tag: latest

auth:
  dummy:
    password: testpass123

The Helm command I am using is below:

helm install jupyterhub/jupyterhub \
    --version=v0.4 \
    --name=class1 \
    --namespace=kube-system \
    -f config.yaml

Do I need to change the version to --version=0.5 to fix this issue?

Cheers,
Ash

Issue

I am trying to debug a pod with the status «ImagePullBackOff».
The pod is in the namespace minio-operator, but when I try to describe the pod, it is apparently not found.

Why does that happen?

$ kubectl get all -n minio-operator
NAME                                  READY   STATUS             RESTARTS   AGE
pod/minio-operator-5dd99dd858-n6fdj   0/1     ImagePullBackOff   0          7d

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio-operator   0       1            0           7d

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/minio-operator-5dd99dd858   1         1         0       7d

$ kubectl describe pod minio-operator-5dd99dd858-n6fdj
Error from server (NotFound): pods "minio-operator-5dd99dd858-n6fdj" not found


Solution

You’ve not specified the namespace in your describe pod command.

You did kubectl get all -n minio-operator, which gets all resources in the minio-operator namespace, but your kubectl describe has no namespace, so it’s looking in the default namespace for a pod that isn’t there.

kubectl describe pod -n minio-operator <pod name>

Should work OK.

Most operations in Kubernetes are namespaced, so they will require the -n <namespace> argument unless you switch namespaces.
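
For example, the current context's default namespace can be switched so that the -n flag is no longer needed (a minimal sketch using the namespace and pod name from this question):

kubectl config set-context --current --namespace=minio-operator
kubectl describe pod minio-operator-5dd99dd858-n6fdj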

Answered By — SiHa

Answer Checked By — Senaida (WPSolving Volunteer)

I’m trying to complete «Lab 4.2. Working with CPU and Memory Constraints» and ran into the following problem.

I’ve deployed «hog» successfully:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       hog-775c7c858f-c2nmk                       1/1     Running   0          10s
kube-system   calico-kube-controllers-69496d8b75-knwzg   1/1     Running   1          3d14h
kube-system   calico-node-cnj4n                          1/1     Running   1          3d14h
kube-system   calico-node-cw5wb                          1/1     Running   1          3d14h
kube-system   coredns-f9fd979d6-w77l2                    1/1     Running   1          3d14h
kube-system   coredns-f9fd979d6-wzmmq                    1/1     Running   1          3d14h
kube-system   etcd-k8smaster                             1/1     Running   1          3d14h
kube-system   kube-apiserver-k8smaster                   1/1     Running   1          3d14h
kube-system   kube-controller-manager-k8smaster          1/1     Running   1          3d14h
kube-system   kube-proxy-srth7                           1/1     Running   1          3d14h
kube-system   kube-proxy-xwnhc                           1/1     Running   1          3d14h
kube-system   kube-scheduler-k8smaster                   1/1     Running   1          3d14h

But I could not get logs for it:

$ kubectl --namespace default logs hog-775c7c858f-c2nmk
Error from server (NotFound): the server could not find the requested resource ( pods/log hog-775c7c858f-c2nmk)

Could anybody help?

Updated 9/20/19

Problem scenario
You run the kubectl command. You receive «Error from server (NotFound): the server could not find the requested resource.» How do you resolve this?

Solution
1.a. Run this command: kubectl version | grep Version

Look at the GitVersion values for the client and server. They should match or nearly match. (You do not necessarily want the latest version for the client. It is easier to upgrade the client version than the server version.)
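
Illustrative output of the command in 1.a. (the version numbers here are hypothetical, chosen only to show a client/server mismatch):

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", ...}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", ...}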

1.b. This step is optional, for when you are not sure whether the kubectl client version is very different from the server version. If the output of 1.a. looks too busy, you can try either or both of these commands:

kubectl version | grep Version | awk '{print $5}' 
kubectl version | grep Version | awk '{print $4}'

The minor versions (the number between the decimal points, like the «15» in 1.15.4) should be within one of each other. kubectl can sometimes work with a larger difference, but once the client and server minor versions differ by two or more you may get this «Error from server (NotFound): the server could not find the requested resource» message, and the error you received is most likely caused by exactly this version skew. (If the versions are the same or within one of each other, this solution will not help you.) The rest of this solution is about downloading a kubectl client binary whose version is closer to the server’s version and using it.

2. Make a copy of the kubectl file (e.g., to your home directory) so that you can roll back if you have to back out of this change; a sketch of this copy step appears after the download commands below. Then download the kubectl that is consistent with the server version. Replace X.Y.Z with the server’s version (as seen in the output of the command shown in 1.a. above) in the following commands:

cd /tmp/
curl -LO https://storage.googleapis.com/kubernetes-release/release/vX.Y.Z/bin/linux/amd64/kubectl

# An example of the above URL may be https://storage.googleapis.com/kubernetes-release/release/v1.15.5/bin/linux/amd64/kubectl # where 1.15.5 is the version associated with the "kubectl version" output for Server Version.
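
A minimal sketch of the backup step described in 2. above, assuming the existing client lives at /usr/bin/kubectl (check with 'which kubectl' and adjust the path if needed):

cp /usr/bin/kubectl ~/kubectl.bak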

3. Place this kubectl file where the original kubectl file was (e.g., use sudo mv -i /tmp/kubectl /usr/bin/).

4. Run this command: sudo chmod 777 /usr/bin/kubectl

To read more about the implications of doing this (as it could allow other users on the server to run kubectl), see this posting.

5. You are done. Now run the kubectl commands (e.g., kubectl get pods, kubectl get svc).
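
As an optional sanity check (simply re-running the command from 1.a.), the client and server GitVersion values should now be the same or nearly the same:

kubectl version | grep Version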


Hey All!

First of all, thank you for working on this great new deployment option! In my organization we are also trying to go through the installation process but have run into a few issues. It would also be really nice if we could have a call sometime, if that is possible.

We are using EKS on AWS. I only have a bit of experience with these technologies, so I will probably have a few basic questions too.

We tried different NGINX installations: the Helm-based one in your documentation (classic LB) and two others from the upstream site (ALB, NLB) (https://kubernetes.github.io/ingress-nginx/deploy/#aws). We realized that all of these create internet-facing load balancers, which is forbidden in our organization. My networking knowledge is basic, so I would like to know: would using an internal load balancer cause any problems or require further changes in other parts of your recommended installation?
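
(For context: on AWS an internal, non-internet-facing load balancer is usually requested by annotating the ingress controller's Service; with the upstream ingress-nginx Helm chart that would look something like the values below, though the exact values path and annotation are assumed from the upstream chart and Kubernetes AWS docs rather than the Atlassian guide.)

controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"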

I also tried to install into my personal AWS account, where internet-facing load balancers are allowed. I installed a classic load balancer with the recommended Helm commands from here: https://github.com/atlassian-labs/data-center-helm-charts/blob/master/docs/examples/ingress/INGRESS_NGINX.md. Then I continued with the installation guide from here: https://github.com/atlassian-labs/data-center-helm-charts/blob/master/docs/INSTALLATION.md. I skipped steps 3, 4 and 5 just to have the most basic installation. A pod, a service and a statefulset were created, but the pod is stuck in Pending status. Are steps 3, 4 and 5 required to have a running pod?

Then I tried to deploy the ingress (step 4) too and changed values.yaml, but the pod is still in Pending status and the NGINX controller reports ‘could not find any active endpoint’. Is it mandatory to set ‘use-forwarded-headers=true’ in the ConfigMap before installing the NGINX controller, as stated here: https://github.com/atlassian-labs/data-center-helm-charts/blob/master/docs/CONFIGURATION.md?
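
(For context: the setting mentioned above ends up in the NGINX controller's ConfigMap; with the upstream ingress-nginx Helm chart it can be supplied at install time via values along these lines, a sketch assumed from the upstream chart rather than the Atlassian docs.)

controller:
  config:
    use-forwarded-headers: "true"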

As I mentioned at the beginning, I have many questions, some of them probably really basic, but thank you in advance for your help; it would be great to have a discussion with you!

Best regards,

Balazs

