An error occurred while retrieving the requested logs openshift


Issue

  • We are getting "An error occurred while retrieving the requested logs." when trying to view logs for any pod in the OCP 4.1 web interface.
WebSocket connection to 'wss://console-openshift-console.apps.example.com/api/kubernetes/api/v1/namespaces/openshift-console/pods/console-79b6c7bb87-gt2ck/log?container=console&follow=true&tailLines=1000&x-csrf-token=ESx4l2bhkAyUQ8nx9f0%2FmA3qThlJEI6IOptYX2N%2FSPBDwcQuQ1K91DDjT0I3J99QYF4rogNwgleVtq6FV%2BkL7Q%3D%3D' failed: Error during WebSocket handshake: Unexpected response code: 500
  • The command-line logs, exec, and rsh tools give a remote error:
$ oc logs console-79b6c7bb87-gt2ck

Error from server: Get https://master0.example.com:10250/containerLogs/openshift-console/console-79b6c7bb87-gt2ck/console: remote error: tls: internal error
  • We have pending CSRs in an OpenShift 4 cluster after install

  • The attempt to oc exec ... is failing

$ oc exec marketplace-operator-768b99959-9pftm -n openshift-marketplace -- echo foo
Error from server: error dialing backend: remote error: tls: internal error

$ oc logs marketplace-operator-768b99959-9pftm -n openshift-marketplace
Error from server: Get https://master:10250/containerLogs/openshift-marketplace/marketplace-operator-768b99959-9pftm/marketplace-operator: remote error: tls: internal error
  • kube-apiserver container has errors
$ sudo crictl ps | grep kube-api
239ec13eeaf4e       beaf65fce4dc16947c5bd5d1ca7e16313234c393e8ca1c4251ac9b85094972bb   About an hour ago   Running             kube-apiserver-operator                   3                   bd197ceb6f882
6f2bdcab072ca       beaf65fce4dc16947c5bd5d1ca7e16313234c393e8ca1c4251ac9b85094972bb   About an hour ago   Running             kube-apiserver-cert-syncer-8              1                   6938a6ebc2c3d
e6b9db2994d07       0d8dcfc307048a0f0400e644fcd1c9929018103b15d0f9b23b4841f1e71937bc   About an hour ago   Running             kube-apiserver-8                          1                   6938a6ebc2c3d

$ sudo crictl logs e6b9db2994d07
...
E0725 17:38:54.707552       1 status.go:64] apiserver received an error that is not an metav1.Status: &url.Error{Op:"Get", URL:"https://master:10250/containerLogs/openshift-kube-apiserver/kube-apiserver-master/kube-apiserver-8", Err:(*net.OpError)(0xc01ec89270)}
...
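
The pending CSRs noted above are the usual cause of the kubelet's "tls: internal error" on port 10250: the node's serving-certificate request was never approved, so the kubelet has no certificate to serve. A minimal sketch of spotting and approving them, assuming `oc` access as cluster-admin; the CSR names below are mock data for illustration:

```shell
# Mock of `oc get csr` output to demonstrate filtering for Pending requests
# (against a live cluster you would pipe `oc get csr` directly).
csr_list='NAME        AGE   REQUESTOR                         CONDITION
csr-8b2mt   15m   system:node:master0.example.com   Pending
csr-qpx4z   14m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued'
echo "$csr_list" | awk 'NR>1 && $NF=="Pending" {print $1}'

# On a live cluster (requires cluster-admin):
#   oc get csr
#   oc get csr -o name | xargs oc adm certificate approve
```

Once the serving CSRs are approved, `oc logs`, `oc exec`, and the web-console log viewer should start working without the TLS handshake failure.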

Environment

  • Red Hat OpenShift Container Platform
    • 4.x


Contents

  1. Logs of a deployed pod not showing up in web console #6757
  2. Comments
  3. An error occurred while retrieving the requested logs openshift
  4. Answered by:
  5. Question
  6. Answers
  7. All replies
  8. openshift-install fails with "Still waiting for the Kubernetes API: an error on the server ("") has prevented the request from succeeding" #5341
  9. Comments
  10. Platform:
  11. Anything else we need to know?
  12. Log File
  13. Debug
  14. References
  15. openshift-install gather bootstrap --bootstrap bootstrap.openshift.interop.com --master master-0.openshift.interop.com


An error occurred while retrieving the requested logs openshift

This forum has migrated to Microsoft Q&A. Visit Microsoft Q&A to post new questions.

Answered by:

Question

I am getting the below error when we try the image deployment using boot image,

trying the deployment on a site with only have the DP and i force that DP to assigned to secondary site,

Created a boundary group and added the DP and secondary site to that group and enable site assignment for this group.

0, HRESULT=80004005 (e:qfentssmsclienttasksequencetsmbootstraptsmbootstraputil.cpp,1931)]LOG]!>

m_pTSMediaWizardControl->GetPolicy(), HRESULT=80004005 (e:qfentssmsclienttasksequencetsmbootstraptsmediawelcomepage.cpp,303)]LOG]!>

Answers

The problem has been fixed.

The static boot image works fine. The problem was that one task sequence was created as Required, so all unknown-machine image deployments used that task sequence.

After changing the task sequence's purpose from Required to Available, the problem was fixed.

Thank you all for your updates.

Do you think it's a problem with the boundary or boundary group?

When I try to ping the FQDN of the server from the client, it resolves and gets a reply, so there is no issue with the network.

When I checked another machine in the same location with the SCCM client installed, it shows the Primary Site as the MP, the Secondary Site as a Proxy MP, and the Primary Site code.

Just because you can ping a resource does not in any way mean that there is no network issue. It just means that there is a path from the source to the destination.

You said "boot image" above; do you mean "boot media"? If so, when was the boot media created, and is it dynamic or static?

Jason | http://blog.configmgrftw.com | @jasonsandys

Please check Mpcontrol.log to see if MP is working fine. You should also check MP_Location.log (records location request and reply activity from clients) and IIS logs on the MP (searching IP address of the client).

Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

Thanks for your reply,

We used Dynamic Boot Media. MP works fine; when checking IIS log on MP, The IP address register on IIS below is the log details

10.129.155.214 GET /SMS_MP/.sms_aut MPKEYINFORMATIONMEDIA 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 4044 361
10.129.155.214 CCM_POST /ccm_system/request - 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 4196 271
10.129.155.214 CCM_POST /ccm_system/request - 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 4199 295
10.129.155.214 CCM_POST /ccm_system/request - 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 317 298
10.129.155.214 CCM_POST /ccm_system/request - 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 169716 2421
10.129.155.214 GET /SMS_MP/.sms_pol F0120001-F010000A-6F6BCC28.4_00 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 504497 3803
10.129.155.214 GET /SMS_MP/.sms_pol F0120009-F0100023-6F6BCC28.37_00 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 1019028 3293
10.129.155.214 GET /SMS_MP/.sms_pol F012000F-20100417-6F6BCC28.5_00 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 658841 11818
10.129.155.214 GET /SMS_MP/.sms_pol F0120013-20100416-6F6BCC28.10_00 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 831329 5143
10.129.155.214 GET /SMS_MP/.sms_pol F0120017-2010040D-6F6BCC28.5_00 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 790817 3780
10.129.155.214 GET /SMS_MP/.sms_pol F0120018-F010003F-6F6BCC28.3_00 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 685025 4407
10.129.155.214 GET /SMS_MP/.sms_pol F0120019-201003F8-6F6BCC28.2_00 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 777625 4137
10.129.155.214 GET /SMS_MP/.sms_pol F012001A-20100409-6F6BCC28.1_00 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 829809 5155
10.129.155.214 GET /SMS_MP/.sms_pol DEP-F0120018-CAS00002-6F6BCC28.1_00 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 35317 642
10.129.155.214 GET /SMS_MP/.sms_pol %7B3376dd3e-bcd5-452f-a991-dbff9cc0373a%7D.9_00 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 6454 354
10.129.155.214 CCM_POST /ccm_system/request - 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 4504 397
10.129.155.214 GET /SMS_MP/.sms_pol DEP-F0120018-CEN0003C-6F6BCC28.1_00 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 35041 607
10.129.155.214 CCM_POST /ccm_system/request - 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 4555 436
10.129.155.214 GET /SMS_MP/.sms_aut MPLOCATION&ir=10.159.168.206&ip=10.159.168.0 80 - 10.159.168.206 SMS+CCM+5.0+TS - 200 0 0 334 4633

Other sites connected to this MP work fine; only these remote DPs have the issue. When we perform an NSLOOKUP of the MP server, we get "DNS request timed out." Is that a problem?

Source


Logs of a deployed pod not showing up in web console #6757

Comments

myfear commented Jan 21, 2016

Navigating to a deployment and switching over to the "Logs" tab shows a black box with a little text underneath:

"An error occurred loading the log. Reload"

oc get logs works.

Tested with "admin/admin" and with a user having access to the logs via the oc command.

Anything I can attach here to make it easier to search for?

The text was updated successfully, but these errors were encountered:

deads2k commented Jan 21, 2016

jwforres commented Jan 21, 2016

@myfear what phase/status is the deployment in when you are trying to look at the logs? Can you look in the network tab and see if it is failing to open the WebSocket connection? Chrome's network tab handles WebSockets better than Firefox's, so I recommend debugging there.

jwforres commented Jan 21, 2016

myfear commented Jan 21, 2016

Chrome console network log:

imagestreams (subscribe) Object
builds (subscribe) Object <>
WebSocket connection to 'wss://localhost:8443/oapi/v1/namespaces/myfear/deploymentconfigs/swarm/log?…0485760&version=5&access_token=rY8UqUQo5fdoFYYK6Gklhrr2SW6gwKUcCSsZ5J9LAX4' failed: Error during WebSocket handshake: Unexpected response code: 204

jwforres commented Jan 21, 2016

So this was a successful deployment; the pod for successful deployments is reaped very quickly, and then the logs are no longer available. If you ran oc again right now, I expect you would not get any logs.

We know it's a usability issue that can't be solved well until we have a passthrough API to the ELK stack.

myfear commented Jan 21, 2016

Agreed. I have to admit that I was actually looking for the running pod and not specifically the deployment pod. Even so, I personally believe the usability issue here is huge.
Thanks for the clarification!

jwforres commented Jan 21, 2016

That's really helpful feedback. @spadgett we probably need to do something better to guide users to the deployment's pods from the deployment page. Also, when you go to the deployment log tab for the active deployment, maybe we should show an info box, something like "If you want to see the logs for the pods created by this deployment, do xyz".

spadgett commented Jan 29, 2016

Spoke with @jwforres. I'd like to add a dropdown on the Logs tab so that you can see logs from any pod in the deployment in addition to the deployer pod. If we were getting the pod list, it would also help us show a better message when the deployer logs aren't available.

smarterclayton commented Jan 29, 2016

I was looking at TeamCity's layout today and I like some of what they do
on the logs page:

It has a solid feel to it

On Fri, Jan 29, 2016 at 2:48 PM, Sam Padgett notifications@github.com
wrote:

Spoke with @jwforres https://github.com/jwforres. I’d like add a
dropdown on the Logs tab so that you can see logs from any pod in the
deployment in addition to the deployer pod. If we were getting the pod
list, it would also help us show a better message when the deployer logs
aren’t available.


Reply to this email directly or view it on GitHub
#6757 (comment).

Source

openshift-install fails with "Still waiting for the Kubernetes API: an error on the server ("") has prevented the request from succeeding" #5341

Comments

dalier-ramirez commented Oct 28, 2021

Version

Platform:

What happened?

DEBUG Still waiting for the Kubernetes API: an error on the server ("") has prevented the request from succeeding

What you expected to happen?

Bootstrapping to complete.

How to reproduce it (as minimally and precisely as possible)?

Anything else we need to know?

Log File

Debug

References

The text was updated successfully, but these errors were encountered:

dalier-ramirez commented Oct 28, 2021

First time user. Here is the description:

I am trying to install OpenShift on a single VMware ESXi host as described in https://github.com/I8C/installing-openshift-on-single-vmware-esxi-host.

I had success before with OpenShift v4.5, but am having issues after upgrading to v4.8. Matchbox was also upgraded from v0.8 to v0.9.

staebler commented Nov 10, 2021

There is a lot to unpack here, and I have no familiarity with https://github.com/I8C/installing-openshift-on-single-vmware-esxi-host. But at first glance it appears that there may be a problem with the load balancing on your bastion host.

Can you ssh into your bastion host to see if there is connectivity to https://192.168.67.253:6443 from there?

Are you able to ssh into the bootstrap VM from the bastion host? If so, you could run the openshift-install gather bootstrap command from the bastion host to get the bootstrap gather.

dalier-ramirez commented Nov 10, 2021

I think the failure occurs before the bootstrap VM gets to boot.

openshift-install gather bootstrap --bootstrap bootstrap.openshift.interop.com --master master-0.openshift.interop.com

INFO Pulling debug logs from the bootstrap machine
FATAL failed to create SSH client: dial tcp 192.168.67.253:22: connect: no route to host

Attached are the wget, curl, and openssl outputs from the bastion VM, comparing HTTP versus HTTPS.

dalier-ramirez commented Nov 10, 2021

Sorry for the font. Still not used to markup.

staebler commented Nov 10, 2021

OK. What is the serial console output on the bootstrap VM?

dalier-ramirez commented Nov 10, 2021

Bootstrap VM console attached

staebler commented Nov 10, 2021

OK. So the bootstrap VM failed to ignite. That is not an issue with the OpenShift installer. That is an issue with how you have your VM configured. Here is another issue with some pointers on how to figure out what the issue may be.

dalier-ramirez commented Nov 11, 2021

Removed blank lines and extra characters from the /var/lib/matchbox/profiles/bootstrap.json pulled from the Git repo.
The bootstrap VM is now trying to boot.
But it is still not completing.

dalier-ramirez commented Nov 11, 2021

time="2021-11-10T20:26:14-05:00" level=error msg="Attempted to gather ClusterOperator status after wait failure: listing ClusterOperator objects: an error on the server (\"\") has prevented the request from succeeding (get clusteroperators.config.openshift.io)"
time="2021-11-10T20:26:14-05:00" level=info msg="Use the following commands to gather logs from the cluster"
time="2021-11-10T20:26:14-05:00" level=info msg="openshift-install gather bootstrap --help"
time="2021-11-10T20:26:14-05:00" level=error msg="Bootstrap failed to complete: an error on the server (\"\") has prevented the request from succeeding"
time="2021-11-10T20:26:14-05:00" level=error msg="Failed waiting for Kubernetes API. This error usually happens when there is a problem on the bootstrap host that prevents creating a temporary control plane."
time="2021-11-10T20:26:14-05:00" level=fatal msg="Bootstrap failed to complete"

staebler commented Nov 11, 2021

The bootstrap VM cannot fetch its OS from the bastion. It looks like the script that you are following is well out of date.
https://github.com/I8C/installing-openshift-on-single-vmware-esxi-host/blob/b88de8b5b646648c9b5623042abd89df6de0fd3a/installBastion.sh#L45-L48
The referenced RHCOS files no longer exist at the URL used.
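
A quick way to confirm this class of failure (PXE assets missing from the bastion) is to check each asset URL from the matchbox profile before booting. A sketch under the assumption that curl is available; the file:// URL below is a local stand-in so the example is self-contained:

```shell
# Stand-in asset so the check runs anywhere; in practice, use the kernel/initrd
# URLs referenced by /var/lib/matchbox/profiles/bootstrap.json.
echo "mock kernel" > /tmp/rhcos-kernel
asset_url="file:///tmp/rhcos-kernel"
if curl -fso /dev/null "$asset_url"; then
  echo "OK: $asset_url"
else
  echo "MISSING: $asset_url"
fi
```

Running this for every asset URL before powering on the bootstrap VM catches the "files no longer exist at the URL" problem without waiting for the Ignition fetch to time out.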

dalier-ramirez commented Nov 11, 2021

Actually, I have been using the attached installation script. It is updated.

staebler commented Nov 11, 2021

In any event, I cannot provide support for why your bootstrap VM cannot find the file that it is looking for from your TFTP server.

openshift-bot commented Feb 9, 2022

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale .
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen .

If this issue is safe to close now please do so with /close .

openshift-bot commented Mar 11, 2022

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten .
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen .

If this issue is safe to close now please do so with /close .

/lifecycle rotten
/remove-lifecycle stale

openshift-bot commented Apr 10, 2022

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen .
Mark the issue as fresh by commenting /remove-lifecycle rotten .
Exclude this issue from closing again by commenting /lifecycle frozen .

openshift-ci bot commented Apr 10, 2022

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen .
Mark the issue as fresh by commenting /remove-lifecycle rotten .
Exclude this issue from closing again by commenting /lifecycle frozen .

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Source

Troubleshoot issues on Kubernetes/OpenShift

Find out how to troubleshoot issues you might encounter in the following situations.

General troubleshooting

Debug logs

By default, OneAgent logs are located in /var/log/dynatrace/oneagent.

To debug Dynatrace Operator issues, run

You might also want to check the logs from OneAgent pods deployed through Dynatrace Operator.

Troubleshoot common Dynatrace Operator setup issues using the troubleshoot subcommand

Dynatrace Operator version 0.9.0+

Run the command below to retrieve a basic output on DynaKube status, such as:

Namespace: If the dynatrace namespace exists (name can be overwritten via parameter)

DynaKube:

  • If CustomResourceDefinition exists
  • If CustomResource with the given name exists (name can be overwritten via parameter)
  • If the API URL ends with /api
  • If the secret name is the same as DynaKube (or .spec.tokens if used)
  • If the secret has apiToken and paasToken set
  • If the secret for customPullSecret is defined

Environment: If your environment is reachable from the Dynatrace Operator pod using the same parameters as the Dynatrace Operator binary (such as proxy and certificate).

OneAgent and ActiveGate image: If the registry is accessible; if the image is accessible from the Dynatrace Operator pod using the registry from the environment with (custom) pull secret.

Note: If you use a different DynaKube name, add the --dynakube argument to the command.

Example output if there are no errors for the above-mentioned fields:

Debug configuration and monitoring issues using the Kubernetes Monitoring Statistics extension

  • Troubleshoot your Kubernetes monitoring setup
  • Troubleshoot your Prometheus integration setup
  • Get detailed insights into queries from Dynatrace to the Kubernetes API
  • Receive alerts when your Kubernetes monitoring setup experiences issues
  • Get alerted on slow response times of your Kubernetes API

Set up monitoring errors

Pods stuck in Terminating state after upgrade

If your CSI driver and OneAgent pods get stuck in Terminating state after upgrading from Dynatrace Operator version 0.9.0, you need to manually delete the pods that are stuck.

Run the command below.
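The stuck pods can be force-deleted with something like the following sketch (the dynatrace namespace is an assumption; the awk filter simply selects rows whose STATUS column reads Terminating):

```shell
# Force-delete pods stuck in Terminating in the dynatrace namespace
# (namespace is an assumption; adjust to your deployment).
stuck=$(kubectl -n dynatrace get pods --no-headers 2>/dev/null | awk '$3 == "Terminating" {print $1}' || true)
for pod in $stuck; do
  kubectl -n dynatrace delete pod "$pod" --force --grace-period=0
done
```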

Unable to retrieve the complete list of server APIs

unable to retrieve the complete list of server APIs: external.metrics.k8s.io/v1beta1: the server is currently unable to handle the request

If the Dynatrace Operator pod logs this error, you need to identify and fix the problematic services. To identify them:

  1. Check available resources.
  2. If the command returns this error, list all the API services and make sure there aren’t any False services.
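A sketch of the second step (assumes kubectl access; in the default `kubectl get apiservices` output the AVAILABLE column is the third field):

```shell
# List APIService objects that are not available; these are the services
# that break API discovery for every client.
kubectl get apiservices 2>/dev/null | awk 'NR > 1 && $3 != "True"' || true
```

Any service printed here (typically showing False with a reason such as FailedDiscoveryCheck) needs to be fixed or removed.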

CrashLoopBackOff: Downgrading OneAgent is not supported, please uninstall the old version first

If you get this error, the OneAgent version installed on your host is later than the version you’re trying to run.

Solution: First uninstall OneAgent from the host, and then select your desired version in the Dynatrace web UI or in DynaKube. To uninstall OneAgent, connect to the host and run the uninstall.sh script. (The default location is /opt/dynatrace/oneagent/agent/uninstall.sh.)

Note: For CSI driver deployments, use the following commands instead:

  1. Delete the Dynakube custom resources.
  2. Delete the CSI driver manifest.
  3. Delete the /var/lib/kubelet/plugins/csi.oneagent.dynatrace.com directory from all Kubernetes nodes.
  4. Reapply the CSI driver and DynaKube custom resources.
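The four steps might look like this (a sketch; the manifest file names follow the upstream release assets and are assumptions, and the node-local cleanup has to run on every node):

```shell
# 1. Delete the DynaKube custom resources.
kubectl delete dynakube --all -n dynatrace

# 2. Delete the CSI driver manifest.
kubectl delete -f kubernetes-csi.yaml

# 3. On every node, remove the CSI driver state directory:
#      rm -rf /var/lib/kubelet/plugins/csi.oneagent.dynatrace.com

# 4. Reapply the CSI driver and the DynaKube custom resources.
kubectl apply -f kubernetes-csi.yaml
kubectl apply -f dynakube.yaml
```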

Crash loop on pods when installing OneAgent

If you get a crash loop on the pods when you install OneAgent, you need to increase the CPU and memory limits of the pods.

Deployment seems successful but the dynatrace-oneagent container doesn’t show up as ready

Change the value REPLACE_WITH_YOUR_URL in the dynatrace-oneagent.yml DaemonSet with the Dynatrace OneAgent installer URL.


Deployment seems successful, however the dynatrace-oneagent image can’t be pulled

This is typically the case if the dynatrace service account hasn’t been allowed to pull images from the RHCC.

Deployment seems successful, but the dynatrace-oneagent container doesn’t produce meaningful logs

This is typically the case if the container hasn’t yet fully started. Simply wait a few more seconds.

Deployment seems successful, but the dynatrace-oneagent container isn’t running

Please note that quotes are needed to protect the special shell characters in the OneAgent installer URL.

This is typically the case if the dynatrace service account hasn’t been configured to run privileged pods.

Deployment was successful, but monitoring data isn’t available in Dynatrace

This is typically caused by a timing issue that occurs if application containers have started before OneAgent was fully installed on the system. As a consequence, some parts of your application run uninstrumented. To be on the safe side, OneAgent should be fully integrated before you start your application containers. If your application has already been running, restarting its containers will have the very same effect.

No pods scheduled on control-plane nodes

Kubernetes version 1.24+

Taints on master and control plane nodes are changed on Kubernetes versions 1.24+, and the OneAgent DaemonSet is missing appropriate tolerations in the DynaKube custom resource.

To add the necessary tolerations, edit the DynaKube YAML as follows.
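Equivalently, the tolerations can be patched in (a sketch, shown for classicFullStack; put them under the section matching your deployment mode, and replace the DynaKube name dynakube if yours differs):

```shell
# Add the Kubernetes 1.24+ control-plane tolerations to the OneAgent spec.
# DynaKube name, namespace, and deployment mode section are assumptions.
kubectl -n dynatrace patch dynakube dynakube --type merge -p '{
  "spec": {"oneAgent": {"classicFullStack": {"tolerations": [
    {"key": "node-role.kubernetes.io/master", "operator": "Exists", "effect": "NoSchedule"},
    {"key": "node-role.kubernetes.io/control-plane", "operator": "Exists", "effect": "NoSchedule"}
  ]}}}}'
```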

Error when applying the custom resource on GKE

If you are getting this error when trying to apply the custom resource on your GKE cluster, the firewall is blocking requests from the Kubernetes API to the Dynatrace Webhook because the required port (8443) is blocked by default.

The default allowed ports (443 and 10250) on GCP refer to the ports exposed by your nodes and pods, not the ports exposed by any Kubernetes services. For example, if the cluster control plane attempts to access a service on port 443 such as the Dynatrace webhook, but the service is implemented by a pod using port 8443, this is blocked by the firewall.

To fix this, add a firewall rule to explicitly allow ingress to port 8443.
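For example (a sketch; the rule name is arbitrary, and the control plane CIDR and network placeholders must be read from your own cluster, e.g. via `gcloud container clusters describe`):

```shell
# Allow ingress from the GKE control plane to the webhook port 8443.
# <control_plane_cidr> and <network> are placeholders for your cluster's
# master CIDR block and VPC network.
gcloud compute firewall-rules create allow-dynatrace-webhook \
    --direction INGRESS \
    --action ALLOW \
    --rules tcp:8443 \
    --source-ranges <control_plane_cidr> \
    --network <network>
```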

CannotPullContainerError

If you get errors like this on your pods when installing Dynatrace OneAgent, your Docker download rate limit has been exceeded.

CannotPullContainerError: inspect image has been retried [X] time(s): httpReaderSeeker: failed open: unexpected status code

Limit log timeframe

Dynatrace Operator version 0.10.0+

If there’s DiskPressure on your nodes, you can configure the CSI driver log garbage collection interval to lower the storage usage of the CSI driver. The default value of keeping logs before they are deleted from the file system is 7 (days). To edit this timeframe, select one of the options below, depending on your deployment mode.

Be careful when setting this value; you might need the logs to investigate problems.

  1. Edit the manifests of the CSI driver daemonset ( kubernetes-csi.yaml , openshift-csi.yaml ), by replacing the placeholders ( ) with your value.
  2. Apply the changes.

Edit values.yaml to set the maxUnmountedVolumeAge parameter under the csidriver section.
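If you installed via Helm, the same parameter can be set on upgrade (a sketch; the chart reference is an assumption, and the parameter name is the csidriver-section setting described above, with the value in days):

```shell
# Keep CSI driver logs for 3 days instead of the default 7.
# Release name and chart reference are assumptions; adjust to your install.
helm upgrade dynatrace-operator dynatrace/dynatrace-operator \
    -n dynatrace --reuse-values \
    --set csidriver.maxUnmountedVolumeAge=3
```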

Connectivity issues between Dynatrace and your cluster

Problem with ActiveGate token

Example error on the ActiveGate deployment status page:

Problem with ActiveGate token (reason:Absent)

Example error on Dynatrace Operator logs:

Example error on DynaKube status:

Starting with Dynatrace Operator version 0.9.0, Dynatrace Operator handles the ActiveGate token by default. If you’re getting one of these errors, follow the instructions below, according to your Dynatrace Operator version.

  • For Dynatrace Operator versions earlier than 0.7.0: you need to upgrade to the latest Dynatrace Operator version.
  • For Dynatrace Operator version 0.7.0 or later, but earlier than version 0.9.0: you need to create a new API token. For instructions, see Tokens and permissions required: Dynatrace Operator token.

ImagePullBackoff error on OneAgent and ActiveGate pods

The underlying host’s container runtime doesn’t contain the certificate presented by your endpoint.

Note: The skipCertCheck field in the DynaKube YAML does not control this certificate check.

Example error (the error message may vary):

In this example, if the description on your pod shows x509: certificate signed by unknown authority , you must fix the certificates on your Kubernetes hosts, or use the private repository configuration to store the images.

There was an error with the TLS handshake

The certificate for the communication is invalid or expired. If you’re using a self-signed certificate, check the mitigation procedures for the ActiveGate.

Invalid bearer token

The bearer token is invalid and the request has been rejected by the Kubernetes API. Verify the bearer token. Make sure it doesn’t contain any whitespaces. If you’re connecting to a Kubernetes cluster API via a centralized external role-based access control (RBAC), consult the documentation of the Kubernetes cluster manager. For Rancher, see the guidelines on the official Rancher website.

Could not check credentials. Process is started by other user

There is already a request pending for this integration with an ActiveGate. Wait a couple of minutes and check back.

Internal error occurred: failed calling webhook (…) x509: certificate signed by unknown authority

If you get this error after applying the DynaKube custom resource, your Kubernetes API server may be configured with a proxy. You need to exclude https://dynatrace-webhook.dynatrace.svc from that proxy.

OneAgent unable to connect when using Istio

Applies to cloudNativeFullStack and applicationMonitoring.

Example error in the logs on the OneAgent pods: Initial connect: not successful - retrying after xs.

You can fix this problem by increasing the OneAgent timeout. Add the following feature flag to DynaKube.

Note: Be sure to replace the placeholder ( ) with the name of your DynaKube custom resource.

Connectivity issues when using Calico

If you use Calico to handle or restrict network connections, you might experience connectivity issues, such as:

  • The operator, webhook, and CSI driver pods are constantly restarting
  • The operator cannot reach the API
  • The CSI driver fails to download OneAgent
  • Injection into pods doesn’t work

If you experience these or similar problems, use our GitHub sample policies for common problems.

Notes:

  • For the activegate-policy.yaml and dynatrace-policies.yaml policies, if Dynatrace Operator isn’t installed in the dynatrace namespace (Kubernetes) or project (OpenShift), you need to adapt the metadata and namespace properties in the YAML files accordingly.
  • The purpose of the agent-policy.yaml and agent-policy-external-only.yaml policies is to let OneAgents that are injected into pods open external connections. Only agent-policy-external-only.yaml is required, while agent-policy.yaml allows internal connections to be made, such as pod-to-pod connections, where needed.
  • Because these policies are needed for all pods where OneAgent injects, you also need to adapt the podSelector property of the YAML files.

Potential issues when changing the monitoring mode

  • Changing the monitoring mode from classicFullStack to cloudNativeFullStack affects the host ID calculations for monitored hosts, leading to new IDs being assigned and no connection between old and new entities.
  • If you want to change the monitoring method from applicationMonitoring or cloudNativeFullStack to classicFullStack or hostMonitoring , you need to restart all the pods that were previously instrumented with applicationMonitoring or cloudNativeFullStack .
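The restart in the second case can be done per namespace with a rollout restart rather than deleting pods by hand (a sketch; the namespace is a placeholder, and you need to repeat this for every namespace that was instrumented):

```shell
# Restart all deployments in a previously instrumented namespace so the
# pods come back without the injected OneAgent.
kubectl -n <your_namespace> rollout restart deployment
```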

Monitor Kubernetes/OpenShift with Dynatrace.

Source

I have a kubeadm cluster deployed in a CentOS VM. While trying to deploy the ingress controller following a GitHub guide, I noticed that I’m unable to see logs:

kubectl logs -n ingress-nginx nginx-ingress-controller-697f7c6ddb-x9xkh --previous

Error from server: Get https://192.168.56.34:10250/containerLogs/ingress-nginx/nginx-ingress-controller-697f7c6ddb-x9xkh/nginx-ingress-controller?previous=true: dial tcp 192.168.56.34:10250: getsockopt: connection timed out

In 192.168.56.34 (node1) netstat returns:

tcp6       0      0 :::10250                :::*                    LISTEN      1068/kubelet

In fact I’m unable to see any logs, regardless of the status of the pod.

I disabled both the firewalld and SELinux.

I used a proxy to enable Kubernetes to download images; I have now removed the proxy.

When navigating to the url in the error above i get Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)
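A quick way to separate a network problem from an authorization problem is to probe the kubelet port directly (a sketch, using the IP from the error above): a timeout means the path to 10250 is blocked, while any TLS/HTTP answer, even Forbidden, means the port is reachable:

```shell
# Probe the kubelet port on node1; -w 3 gives up after 3 seconds.
nc -vz -w 3 192.168.56.34 10250

# An HTTP status code (401/403) here still proves connectivity:
curl -ks --connect-timeout 3 -o /dev/null -w '%{http_code}\n' \
    https://192.168.56.34:10250/containerLogs/
```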

I’m also able to fetch my nodes:

kubectl get node
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   Ready      master    32d       v1.9.3
k8s-node1    Ready      <none>    30d       v1.9.3
k8s-node2    NotReady   <none>    32d       v1.9.3
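Separately, k8s-node2 reports NotReady; its conditions and events usually name the cause (kubelet stopped, CNI plugin missing, disk pressure). A sketch of how to inspect it:

```shell
# Show the node's Conditions block and any events referencing it.
kubectl describe node k8s-node2 | sed -n '/Conditions:/,/Addresses:/p'
kubectl get events --all-namespaces \
    --field-selector involvedObject.name=k8s-node2
```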

Comments

@myfear

Navigating to a deployment and switching over to the "Logs" tab shows a black box with a little text underneath:

"An error occurred loading the log. Reload"

oc get logs works.

Tested with "admin/admin" and the user having access to the logs via the oc command.

Anything I can attach here to make it easier to search for?

Thanks,
M

@deads2k

@jwforres

@myfear what phase/status is the deployment in when you are trying to look at the logs? Can you look in the network tab and see if it is failing to open the websocket connection? Chrome’s network tab deals with websockets better than firefox so i recommend debugging there.

@jwforres

@myfear

@jwforres status: deployed

Chrome console network log:

imagestreams (subscribe) Object {swarm-sample-discovery: Object}
builds (subscribe) Object {}
WebSocket connection to 'wss://localhost:8443/oapi/v1/namespaces/myfear/deploymentconfigs/swarm/log?…0485760&version=5&access_token=rY8UqUQo5fdoFYYK6Gklhrr2SW6gwKUcCSsZ5J9LAX4' failed: Error during WebSocket handshake: Unexpected response code: 204

@jwforres

So this was a successful deployment, the pod for successful deployments is reaped very quickly, and then the logs are no longer available. If you ran oc again right now, I expect you would not get any logs.

We know it’s a usability issue that can’t be solved well until we have a passthrough API to the ELK stack.

cc @smarterclayton

@myfear

Agree. I have to admit that I was actually looking for the running pod and not specifically the deployment pod, even though I personally believe the usability issue here is huge.
Thanks for the clarification!

@jwforres

That’s really helpful feedback. @spadgett we probably need to do something better to guide users to the deployment’s pods from the deployment page. Also maybe when you go to the deployment log tab for the active deployment we should show an info box, something like "If you want to see the logs for the pods created by this deployment do xyz"

@spadgett

Spoke with @jwforres. I’d like to add a dropdown on the Logs tab so that you can see logs from any pod in the deployment in addition to the deployer pod. If we were getting the pod list, it would also help us show a better message when the deployer logs aren’t available.

@smarterclayton

@spadgett

The logs now show pod logs for an active deployment.

deployment-logs

@jwforres

So remaining issue is just that we don’t link to the pods from the deployment today in case you wanted to get the logs from other pods in the deployment (since the API just takes the first one’s logs). So that will be the fix in #7134

@spadgett

I opened a separate issue for linking from deployment to pods. (It’s not clear when #7134 will be merged.) Closing since the original deployment log issue is fixed.

  • Remove From My Forums
  • Question

  • I am getting the below error when we try the image deployment using a boot image.

    I am trying the deployment on a site that has only the DP, and I forced that DP to be assigned to the secondary site.

    I created a boundary group, added the DP and the secondary site to that group, and enabled site assignment for this group.

     SMSTS.log

    <![LOG[Using user-defined MP locations: http://Server Name]LOG]!><time="02:19:43.625+480" date="07-27-2015" component="TSMBootstrap" context="" type="1" thread="1660" file="tsmediawizardcontrol.cpp:914">
    <![LOG[Set authenticator in transport]LOG]!><time="02:19:43.625+480" date="07-27-2015" component="TSMBootstrap" context="" type="0" thread="1660" file="libsmsmessaging.cpp:7734">
    <![LOG[Set media certificates in transport]LOG]!><time="02:19:43.657+480" date="07-27-2015" component="TSMBootstrap" context="" type="1" thread="1660" file="libsmsmessaging.cpp:9540">
    <![LOG[IP: 10.159.168.191 10.159.168.0]LOG]!><time="02:19:43.672+480" date="07-27-2015" component="TSMBootstrap" context="" type="1" thread="1660" file="libsmsmessaging.cpp:9561">
    <![LOG[CLibSMSMessageWinHttpTransport::Send: URL: Server Name:80  GET /SMS_MP/.sms_aut?MPLOCATION&ir=10.159.168.191&ip=10.159.168.0]LOG]!><time="02:19:43.672+480" date="07-27-2015" component="TSMBootstrap" context="" type="1" thread="1660" file="libsmsmessaging.cpp:8604">
    <![LOG[Request was successful.]LOG]!><time="02:19:46.274+480" date="07-27-2015" component="TSMBootstrap" context="" type="0" thread="1660" file="libsmsmessaging.cpp:8939">
    <![LOG[pwsSig != NULL, HRESULT=80004005 (e:qfentssmsframeworkosdmessaginglibsmsmessaging.cpp,5592)]LOG]!><time="02:19:46.274+480" date="07-27-2015" component="TSMBootstrap" context="" type="0" thread="1660" file="libsmsmessaging.cpp:5592">
    <![LOG[Invalid MP cert info; no signature]LOG]!><time="02:19:46.274+480" date="07-27-2015" component="TSMBootstrap" context="" type="3" thread="1660" file="libsmsmessaging.cpp:5592">
    <![LOG[CCM::SMSMessaging::CLibSMSMPLocation::RequestMPLocation failed; 0x80004005]LOG]!><time="02:19:46.274+480" date="07-27-2015" component="TSMBootstrap" context="" type="3" thread="1660" file="libsmsmessaging.cpp:5688">
    <![LOG[MPLocation.RequestMPLocation (szTrustedRootKey, sIPSubnets.c_str(), sIPAddresses.c_str(), httpS, http), HRESULT=80004005 (e:qfentssmsframeworkosdmessaginglibsmsmessaging.cpp,9565)]LOG]!><time="02:19:46.274+480" date="07-27-2015" component="TSMBootstrap" context="" type="0" thread="1660" file="libsmsmessaging.cpp:9565">
    <![LOG[CCM::SMSMessaging::GetMPLocations failed; 0x80004005]LOG]!><time="02:19:46.274+480" date="07-27-2015" component="TSMBootstrap" context="" type="3" thread="1660" file="libsmsmessaging.cpp:9569">
    <![LOG[Failed to query http://Server Name for MP location]LOG]!><time="02:19:46.274+480" date="07-27-2015" component="TSMBootstrap" context="" type="3" thread="1660" file="tsmbootstraputil.cpp:1874">
    <![LOG[MpCnt > 0, HRESULT=80004005 (e:qfentssmsclienttasksequencetsmbootstraptsmbootstraputil.cpp,1931)]LOG]!><time="02:19:46.274+480" date="07-27-2015" component="TSMBootstrap" context="" type="0" thread="1660" file="tsmbootstraputil.cpp:1931">
    <![LOG[QueryMPLocator: no valid MP locations are received]LOG]!><time="02:19:46.274+480" date="07-27-2015" component="TSMBootstrap" context="" type="3" thread="1660" file="tsmbootstraputil.cpp:1931">
    <![LOG[TSMBootstrapUtil::QueryMPLocator ( true, sSMSTSLocationMPs.c_str(), sMediaPfx.c_str(), sMediaGuid.c_str(), sAuthenticator.c_str(), sEnterpriseCert.c_str(), sServerCerts.c_str(), nHttpPort, nHttpsPort, bUseCRL, httpS, http, accessibleMpCnt), HRESULT=80004005 (e:qfentssmsclienttasksequencetsmbootstraptsmediawizardcontrol.cpp,925)]LOG]!><time="02:19:46.274+480" date="07-27-2015" component="TSMBootstrap" context="" type="0" thread="1660" file="tsmediawizardcontrol.cpp:925">
    <![LOG[Exiting TSMediaWizardControl::GetPolicy.]LOG]!><time="02:19:46.274+480" date="07-27-2015" component="TSMBootstrap" context="" type="0" thread="1660" file="tsmediawizardcontrol.cpp:1420">
    <![LOG[pWelcomePage->m_pTSMediaWizardControl->GetPolicy(), HRESULT=80004005 (e:qfentssmsclienttasksequencetsmbootstraptsmediawelcomepage.cpp,303)]LOG]!><time="02:19:46.274+480" date="07-27-2015" component="TSMBootstrap" context="" type="0" thread="1660" file="tsmediawelcomepage.cpp:303">

Answers

  • The problem has been fixed.

    The static boot image works fine. The problem was that one task sequence was created as required, so all unknown-machine image deployments used that task sequence.

    After changing the purpose from required to available in the task sequence, the problem was fixed.

    Thank you all for your updates

    • Marked as answer by

      Tuesday, August 4, 2015 4:56 AM

  • Determining where installation issues occur
  • User-provisioned infrastructure installation considerations
  • Checking a load balancer configuration before OpenShift Container Platform installation
  • Specifying OpenShift Container Platform installer log levels
  • Troubleshooting openshift-install command issues
  • Monitoring installation progress
  • Gathering bootstrap node diagnostic data
  • Investigating control plane node installation issues
  • Investigating etcd installation issues
  • Investigating control plane node kubelet and API server issues
  • Investigating worker node installation issues
  • Querying Operator status after installation
  • Gathering logs from a failed installation
  • Additional resources

Determining where installation issues occur

When troubleshooting OpenShift Container Platform installation issues, you can monitor installation logs to determine at which stage issues occur. Then, retrieve diagnostic data relevant to that stage.

OpenShift Container Platform installation proceeds through the following stages:

  1. Ignition configuration files are created.

  2. The bootstrap machine boots and starts hosting the remote resources required for the control plane machines (also known as the master machines) to boot.

  3. The control plane machines fetch the remote resources from the bootstrap machine and finish booting.

  4. The control plane machines use the bootstrap machine to form an etcd cluster.

  5. The bootstrap machine starts a temporary Kubernetes control plane using the new etcd cluster.

  6. The temporary control plane schedules the production control plane to the control plane machines.

  7. The temporary control plane shuts down and passes control to the production control plane.

  8. The bootstrap machine adds OpenShift Container Platform components into the production control plane.

  9. The installation program shuts down the bootstrap machine.

  10. The control plane sets up the worker nodes.

  11. The control plane installs additional services in the form of a set of Operators.

  12. The cluster downloads and configures remaining components needed for the day-to-day operation, including the creation of worker machines in supported environments.

User-provisioned infrastructure installation considerations

The default installation method uses installer-provisioned infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure.

You can alternatively install OpenShift Container Platform 4.8 on infrastructure that you provide. If you use this installation method, follow user-provisioned infrastructure installation documentation carefully. Additionally, review the following considerations before the installation:

  • Check the Red Hat Enterprise Linux (RHEL) Ecosystem to determine the level of Red Hat Enterprise Linux CoreOS (RHCOS) support provided for your chosen server hardware or virtualization technology.

  • Many virtualization and cloud environments require agents to be installed on guest operating systems. Ensure that these agents are installed as a containerized workload deployed through a daemon set.

  • Install cloud provider integration if you want to enable features such as dynamic storage, on-demand service routing, node hostname to Kubernetes hostname resolution, and cluster autoscaling.

    It is not possible to enable cloud provider integration in OpenShift Container Platform environments that mix resources from different cloud providers, or that span multiple physical or virtual platforms. The node life cycle controller will not allow nodes that are external to the existing provider to be added to a cluster, and it is not possible to specify more than one cloud provider integration.

  • A provider-specific Machine API implementation is required if you want to use machine sets or autoscaling to automatically provision OpenShift Container Platform cluster nodes.

  • Check whether your chosen cloud provider offers a method to inject Ignition configuration files into hosts as part of their initial deployment. If they do not, you will need to host Ignition configuration files by using an HTTP server. The steps taken to troubleshoot Ignition configuration file issues will differ depending on which of these two methods is deployed.

  • Storage needs to be manually provisioned if you want to leverage optional framework components such as the embedded container registry, Elasticsearch, or Prometheus. Default storage classes are not defined in user-provisioned infrastructure installations unless explicitly configured.

  • A load balancer is required to distribute API requests across all control plane nodes (also known as the master nodes) in highly available OpenShift Container Platform environments. You can use any TCP-based load balancing solution that meets OpenShift Container Platform DNS routing and port requirements.

Checking a load balancer configuration before OpenShift Container Platform installation

Check your load balancer configuration prior to starting an OpenShift Container Platform installation.

Prerequisites

  • You have configured an external load balancer of your choosing, in preparation for an OpenShift Container Platform installation. The following example is based on a Red Hat Enterprise Linux (RHEL) host using HAProxy to provide load balancing services to a cluster.

  • You have configured DNS in preparation for an OpenShift Container Platform installation.

  • You have SSH access to your load balancer.

Procedure

  1. Check that the haproxy systemd service is active:

    $ ssh <user_name>@<load_balancer> systemctl status haproxy
  2. Verify that the load balancer is listening on the required ports. The following example references ports 80, 443, 6443, and 22623.

    • For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 6, verify port status by using the netstat command:

      $ ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'
    • For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 7 or 8, verify port status by using the ss command:

      $ ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'
  3. Check that the wildcard DNS record resolves to the load balancer:

    $ dig <wildcard_fqdn> @<dns_server>

Specifying OpenShift Container Platform installer log levels

By default, the OpenShift Container Platform installer log level is set to info. If more detailed logging is required when diagnosing a failed OpenShift Container Platform installation, you can increase the openshift-install log level to debug when starting the installation again.

Prerequisites

  • You have access to the installation host.

Procedure

  • Set the installation log level to debug when initiating the installation:

    $ ./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug  (1)
    1 Possible log levels include info, warn, error, and debug.

Troubleshooting openshift-install command issues

If you experience issues running the openshift-install command, check the following:

  • The installation has been initiated within 24 hours of Ignition configuration file creation. The Ignition files are created when the following command is run:

    $ ./openshift-install create ignition-configs --dir=./install_dir
  • The install-config.yaml file is in the same directory as the installer. If an alternative installation path is declared by using the ./openshift-install --dir option, verify that the install-config.yaml file exists within that directory.

Monitoring installation progress

You can monitor high-level installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. This provides greater visibility into how an installation progresses and helps identify the stage at which an installation failure occurs.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

  • You have the fully qualified domain names of the bootstrap and control plane nodes (also known as the master nodes).

    The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.

Procedure

  1. Watch the installation log as the installation progresses:

    $ tail -f ~/<installation_directory>/.openshift_install.log
  2. Monitor the bootkube.service journald unit log on the bootstrap node, after it has booted. This provides visibility into the bootstrapping of the first control plane. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service

    The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.

  3. Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity.

    1. Monitor the logs using oc:

      $ oc adm node-logs --role=master -u kubelet
    2. If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service
  4. Monitor crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity.

    1. Monitor the logs using oc:

      $ oc adm node-logs --role=master -u crio
    2. If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service

Gathering bootstrap node diagnostic data

When experiencing bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.

Prerequisites

  • You have SSH access to your bootstrap node.

  • You have the fully qualified domain name of the bootstrap node.

  • If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.

Procedure

  1. If you have access to the bootstrap node’s console, monitor the console until the node reaches the login prompt.

  2. Verify the Ignition file configuration.

    • If you are hosting Ignition configuration files by using an HTTP server.

      1. Verify the bootstrap node Ignition file URL. Replace <http_server_fqdn> with HTTP server’s fully qualified domain name:

        $ curl -I http://<http_server_fqdn>:<port>/bootstrap.ign  (1)
        1 The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found.
      2. To verify that the Ignition file was received by the bootstrap node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files, enter the following command:

        $ grep -is 'bootstrap.ign' /var/log/httpd/access_log

        If the bootstrap Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.

      3. If the Ignition file was not received, check that the Ignition files exist and that they have the appropriate file and web server permissions on the serving host directly.

    • If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment.

      1. Review the bootstrap node’s console to determine if the mechanism is injecting the bootstrap node Ignition file correctly.

  3. Verify the availability of the bootstrap node’s assigned storage device.

  4. Verify that the bootstrap node has been assigned an IP address from the DHCP server.

  5. Collect bootkube.service journald unit logs from the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

    $ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service

    The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes (also known as the master nodes). After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.

  6. Collect logs from the bootstrap node containers.

    1. Collect the logs using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

      $ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'
  7. If the bootstrap process fails, verify the following.

    • You can resolve api.<cluster_name>.<base_domain> from the installation host.

    • The load balancer proxies port 6443 connections to bootstrap and control plane nodes. Ensure that the proxy configuration meets OpenShift Container Platform installation requirements.
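The first check above can be scripted. This is a minimal sketch using `getent` for the lookup; `api.<cluster_name>.<base_domain>` is a placeholder that you must replace with your cluster's real API hostname. With the placeholder left in place, the lookup fails, which is the expected result outside a real cluster:

```shell
# Hypothetical API endpoint; replace with your cluster's real FQDN.
api="api.<cluster_name>.<base_domain>"

# getent uses the same resolver path as other tools on the installation
# host, so a hit here means the name resolves locally. For the second
# check, confirm separately that the load balancer answers on port 6443.
if getent hosts "$api" >/dev/null 2>&1; then
  echo "$api resolves"
else
  echo "$api does not resolve"
fi
```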

Investigating control plane node installation issues

If you experience control plane node (also known as the master node) installation issues, determine the control plane node OpenShift Container Platform software-defined network (SDN) and network Operator status. Collect kubelet.service and crio.service journald unit logs and control plane node container logs for visibility into control plane node agent, CRI-O container runtime, and pod activity.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

  • You have the fully qualified domain names of the bootstrap and control plane nodes.

  • If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.

    The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.

Procedure

  1. If you have access to the console for the control plane node, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console.

  2. Verify Ignition file configuration.

    • If you are hosting Ignition configuration files by using an HTTP server.

      1. Verify the control plane node Ignition file URL. Replace <http_server_fqdn> with the HTTP server’s fully qualified domain name:

        $ curl -I http://<http_server_fqdn>:<port>/master.ign  (1)
        1 The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found.
      2. To verify that the Ignition file was received by the control plane node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files:

        $ grep -is 'master.ign' /var/log/httpd/access_log

        If the master Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.

      3. If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place.

    • If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment.

      1. Review the console for the control plane node to determine if the mechanism is injecting the control plane node Ignition file correctly.

  3. Check the availability of the storage device assigned to the control plane node.

  4. Verify that the control plane node has been assigned an IP address from the DHCP server.

  5. Determine control plane node status.

    1. Query control plane node status:

      $ oc get nodes

    2. If one of the control plane nodes does not reach a Ready status, retrieve a detailed node description:

      $ oc describe node <master_node>

      It is not possible to run oc commands if an installation issue prevents the OpenShift Container Platform API from running or if the kubelet is not running yet on each node.

  6. Determine OpenShift Container Platform SDN status.

    1. Review sdn-controller, sdn, and ovs daemon set status, in the openshift-sdn namespace:

      $ oc get daemonsets -n openshift-sdn
    2. If those resources are listed as Not found, review pods in the openshift-sdn namespace:

      $ oc get pods -n openshift-sdn
    3. Review logs relating to failed OpenShift Container Platform SDN pods in the openshift-sdn namespace:

      $ oc logs <sdn_pod> -n openshift-sdn
  7. Determine cluster network configuration status.

    1. Review whether the cluster’s network configuration exists:

      $ oc get network.config.openshift.io cluster -o yaml
    2. If the installer failed to create the network configuration, generate the Kubernetes manifests again and review message output:

      $ ./openshift-install create manifests
    3. Review the pod status in the openshift-network-operator namespace to determine whether the Cluster Network Operator (CNO) is running:

      $ oc get pods -n openshift-network-operator
    4. Gather network Operator pod logs from the openshift-network-operator namespace:

      $ oc logs pod/<network_operator_pod_name> -n openshift-network-operator
  8. Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity.

    1. Retrieve the logs using oc:

      $ oc adm node-logs --role=master -u kubelet
    2. If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service

      OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.

  9. Retrieve crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity.

    1. Retrieve the logs using oc:

      $ oc adm node-logs --role=master -u crio
    2. If the API is not functional, review the logs using SSH instead:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
  10. Collect logs from specific subdirectories under /var/log/ on control plane nodes.

    1. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:

      $ oc adm node-logs --role=master --path=openshift-apiserver
    2. Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:

      $ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
    3. If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log
  11. Review control plane node container logs using SSH.

    1. List the containers:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a
    2. Retrieve a container’s logs using crictl:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
  12. If you experience control plane node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity.

    1. Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values:

      $ curl https://api-int.<cluster_name>:22623/config/master
    2. If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623.

    3. Verify that the MCO endpoint’s DNS record is configured and resolves to the load balancer.

      1. Run a DNS lookup for the defined MCO endpoint name:

        $ dig api-int.<cluster_name> @<dns_server>
      2. Run a reverse lookup to the assigned MCO IP address on the load balancer:

        $ dig -x <load_balancer_mco_ip_address> @<dns_server>
    4. Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

      $ ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master
    5. System clock time must be synchronized between bootstrap, master, and worker nodes. Check each node’s system clock reference time and time synchronization statistics:

      $ ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking
    6. Review certificate validity:

      $ openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text
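The `chronyc tracking` output in step 5 can be reduced to the figure that matters for this check. The following sketch parses a saved sample of the command's output (the reference ID and offset values are invented for illustration) and extracts the system-time offset line:

```shell
# Sample `chronyc tracking` output; the values are invented for
# illustration only.
cat > /tmp/chrony_sample.txt <<'EOF'
Reference ID    : C0A80001 (ntp.example.com)
Stratum         : 3
System time     : 0.000133 seconds fast of NTP time
Last offset     : +0.000021 seconds
RMS offset      : 0.000135 seconds
EOF

# Pull out the current offset from NTP time; a large or steadily growing
# value here points at clock-synchronization problems between nodes.
awk -F' : ' '/^System time/ {print $2}' /tmp/chrony_sample.txt
```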

Investigating etcd installation issues

If you experience etcd issues during installation, you can check etcd pod status and collect etcd pod logs. You can also verify etcd DNS records and check DNS availability on control plane nodes (also known as the master nodes).

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

  • You have the fully qualified domain names of the control plane nodes.

Procedure

  1. Check the status of etcd pods.

    1. Review the status of pods in the openshift-etcd namespace:

      $ oc get pods -n openshift-etcd
    2. Review the status of pods in the openshift-etcd-operator namespace:

      $ oc get pods -n openshift-etcd-operator
  2. If any of the pods listed by the previous commands are not showing a Running or a Completed status, gather diagnostic information for the pod.

    1. Review events for the pod:

      $ oc describe pod/<pod_name> -n <namespace>
    2. Inspect the pod’s logs:

      $ oc logs pod/<pod_name> -n <namespace>
    3. If the pod has more than one container, the preceding command will return an error, and the container names will be provided in the error message. Inspect logs for each container:

      $ oc logs pod/<pod_name> -c <container_name> -n <namespace>
  3. If the API is not functional, review etcd pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values.

    1. List etcd pods on each control plane node:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-
    2. For any pods not showing Ready status, inspect pod status in detail. Replace <pod_id> with the pod’s ID listed in the output of the preceding command:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>
    3. List containers related to a pod:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'
    4. For any containers not showing Ready status, inspect container status in detail. Replace <container_id> with container IDs listed in the output of the preceding command:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>
    5. Review the logs for any containers not showing a Ready status. Replace <container_id> with the container IDs listed in the output of the preceding command:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>

      OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.

  4. Validate primary and secondary DNS server connectivity from control plane nodes.

Investigating control plane node kubelet and API server issues

To investigate control plane node (also known as the master node) kubelet and API server issues during installation, check DNS, DHCP, and load balancer functionality. Also, verify that certificates have not expired.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

  • You have the fully qualified domain names of the control plane nodes.

Procedure

  1. Verify that the API server’s DNS record directs the kubelet on control plane nodes to https://api-int.<cluster_name>.<base_domain>:6443. Ensure that the record references the load balancer.

  2. Ensure that the load balancer’s port 6443 definition references each control plane node.

  3. Check that unique control plane node hostnames have been provided by DHCP.

  4. Inspect the kubelet.service journald unit logs on each control plane node.

    1. Retrieve the logs using oc:

      $ oc adm node-logs --role=master -u kubelet
    2. If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service

      OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.

  5. Check for certificate expiration messages in the control plane node kubelet logs.

    1. Retrieve the log using oc:

      $ oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'
    2. If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service  | grep -is 'x509: certificate has expired'
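To see what a hit from this grep looks like, you can run the same pattern against a saved sample line; the timestamp, host, and PID below are invented for illustration:

```shell
# Sample kubelet journal line containing an expired-certificate error;
# the timestamp, host, and PID are invented for illustration.
cat > /tmp/kubelet_sample.log <<'EOF'
May 10 14:20:33 master0 hyperkube[2114]: E0510 14:20:33.812345    2114 certificate_manager.go:471] x509: certificate has expired or is not yet valid
EOF

# The same case-insensitive pattern as the documented check.
grep -is 'x509: certificate has expired' /tmp/kubelet_sample.log
```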

Investigating worker node installation issues

If you experience worker node installation issues, you can review the worker node status. Collect kubelet.service and crio.service journald unit logs and the worker node container logs for visibility into the worker node agent, CRI-O container runtime, and pod activity. Additionally, you can check the Ignition file and Machine API Operator functionality. If worker node post-installation configuration fails, check Machine Config Operator (MCO) and DNS functionality. You can also verify system clock synchronization between the bootstrap, master, and worker nodes, and validate certificates.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have SSH access to your hosts.

  • You have the fully qualified domain names of the bootstrap and worker nodes.

  • If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server’s fully qualified domain name and the port number. You must also have SSH access to the HTTP host.

    The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host.

Procedure

  1. If you have access to the worker node’s console, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console.

  2. Verify Ignition file configuration.

    • If you are hosting Ignition configuration files by using an HTTP server.

      1. Verify the worker node Ignition file URL. Replace <http_server_fqdn> with the HTTP server’s fully qualified domain name:

        $ curl -I http://<http_server_fqdn>:<port>/worker.ign  (1)
        1 The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found.
      2. To verify that the Ignition file was received by the worker node, query the HTTP server logs on the HTTP host. For example, if you are using an Apache web server to serve Ignition files:

        $ grep -is 'worker.ign' /var/log/httpd/access_log

        If the worker Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.

      3. If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place.

    • If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment.

      1. Review the worker node’s console to determine if the mechanism is injecting the worker node Ignition file correctly.

  3. Check the availability of the worker node’s assigned storage device.

  4. Verify that the worker node has been assigned an IP address from the DHCP server.

  5. Determine worker node status.

    1. Query node status:

      $ oc get nodes

    2. Retrieve a detailed node description for any worker nodes not showing a Ready status:

      $ oc describe node <worker_node>

      It is not possible to run oc commands if an installation issue prevents the OpenShift Container Platform API from running or if the kubelet is not running yet on each node.

  6. Unlike control plane nodes (also known as the master nodes), worker nodes are deployed and scaled using the Machine API Operator. Check the status of the Machine API Operator.

    1. Review Machine API Operator pod status:

      $ oc get pods -n openshift-machine-api
    2. If the Machine API Operator pod does not have a Ready status, detail the pod’s events:

      $ oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api
    3. Inspect machine-api-operator container logs. The container runs within the machine-api-operator pod:

      $ oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator
    4. Also inspect kube-rbac-proxy container logs. The container also runs within the machine-api-operator pod:

      $ oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy
  7. Monitor kubelet.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node agent activity.

    1. Retrieve the logs using oc:

      $ oc adm node-logs --role=worker -u kubelet
    2. If the API is not functional, review the logs using SSH instead. Replace <worker-node>.<cluster_name>.<base_domain> with appropriate values:

      $ ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service

      OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain>.

  8. Retrieve crio.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node CRI-O container runtime activity.

    1. Retrieve the logs using oc:

      $ oc adm node-logs --role=worker -u crio
    2. If the API is not functional, review the logs using SSH instead:

      $ ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
  9. Collect logs from specific subdirectories under /var/log/ on worker nodes.

    1. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/sssd/ on all worker nodes:

      $ oc adm node-logs --role=worker --path=sssd
    2. Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/sssd/sssd.log contents from all worker nodes:

      $ oc adm node-logs --role=worker --path=sssd/sssd.log
    3. If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/sssd/sssd.log:

      $ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log
  10. Review worker node container logs using SSH.

    1. List the containers:

      $ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a
    2. Retrieve a container’s logs using crictl:

      $ ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>
  11. If you experience worker node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity.

    1. Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values:

      $ curl https://api-int.<cluster_name>:22623/config/worker
    2. If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623.

    3. Verify that the MCO endpoint’s DNS record is configured and resolves to the load balancer.

      1. Run a DNS lookup for the defined MCO endpoint name:

        $ dig api-int.<cluster_name> @<dns_server>
      2. Run a reverse lookup to the assigned MCO IP address on the load balancer:

        $ dig -x <load_balancer_mco_ip_address> @<dns_server>
    4. Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:

      $ ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker
    5. System clock time must be synchronized between bootstrap, master, and worker nodes. Check each node’s system clock reference time and time synchronization statistics:

      $ ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking
    6. Review certificate validity:

      $ openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text
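The `openssl s_client` check above needs a live endpoint, but the expiry inspection itself can be rehearsed locally. This sketch generates a short-lived self-signed certificate (the subject CN is a stand-in for the real api-int endpoint) and checks it with `openssl x509 -checkend`, a convenient scripted alternative to reading the `-text` output by eye:

```shell
# Generate a throwaway self-signed certificate valid for one day; the
# subject CN is a stand-in for the real api-int endpoint.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=api-int.example.com" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# -checkend N exits 0 if the certificate is still valid N seconds from now.
if openssl x509 -in /tmp/demo.crt -noout -checkend 0 >/dev/null; then
  echo "certificate is still valid"
else
  echo "certificate has expired"
fi

# Print the validity window, as you would read it from the -text output.
openssl x509 -in /tmp/demo.crt -noout -dates
```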

Querying Operator status after installation

You can check Operator status at the end of an installation. Retrieve diagnostic data for Operators that do not become available. Review logs for any Operator pods that are listed as Pending or have an error status. Validate base images used by problematic pods.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Check that cluster Operators are all available at the end of an installation.

    $ oc get clusteroperators
  2. Verify that all of the required certificate signing requests (CSRs) are approved. Some nodes might not move to a Ready status and some cluster Operators might not become available if there are pending CSRs.

    1. Check the status of the CSRs and ensure that you see a client and server request with the Pending or Approved status for each machine that you added to the cluster:

      $ oc get csr

      Example output

      NAME        AGE     REQUESTOR                                                                   CONDITION
      csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending (1)
      csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
      csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending (2)
      csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
      ...
      1 A client request CSR.
      2 A server request CSR.

      In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

    2. If the CSRs were not approved automatically, approve them after all of the pending CSRs for the machines that you added are in Pending status:

      Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager.

      For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

      • To approve them individually, run the following command for each valid CSR:

        $ oc adm certificate approve <csr_name> (1)
        1 <csr_name> is the name of a CSR from the list of current CSRs.
      • To approve all pending CSRs, run the following command:

        $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
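If the go-template is hard to remember, the same pending-only selection can be approximated client-side by filtering the plain `oc get csr` table. The sketch below runs that filter over a saved sample of the command's output (the CSR names match the example earlier in this section); in a live cluster you would pipe the resulting names to `oc adm certificate approve`:

```shell
# Sample `oc get csr` output, matching the example earlier in this section.
cat > /tmp/csr_sample.txt <<'EOF'
NAME        AGE     REQUESTOR                                                                   CONDITION
csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Approved,Issued
EOF

# Print the names of CSRs whose CONDITION column is exactly Pending.
awk 'NR > 1 && $NF == "Pending" {print $1}' /tmp/csr_sample.txt
```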
  3. View Operator events:

    $ oc describe clusteroperator <operator_name>
  4. Review Operator pod status within the Operator’s namespace:

    $ oc get pods -n <operator_namespace>
  5. Obtain a detailed description for pods that do not have Running status:

    $ oc describe pod/<operator_pod_name> -n <operator_namespace>
  6. Inspect pod logs:

    $ oc logs pod/<operator_pod_name> -n <operator_namespace>
  7. When experiencing pod base image related issues, review base image status.

    1. Obtain details of the base image used by a problematic pod:

      $ oc get pod -o "jsonpath={range .status.containerStatuses[*]}{.name}{'\t'}{.state}{'\t'}{.image}{'\n'}{end}" <operator_pod_name> -n <operator_namespace>
    2. List base image release information:

      $ oc adm release info <image_path>:<tag> --commits

Gathering logs from a failed installation

If you gave an SSH key to your installation program, you can gather data about
your failed installation.

You use a different command to gather logs about an unsuccessful installation
than to gather logs from a running cluster. If you must gather logs from a
running cluster, use the oc adm must-gather command.

Prerequisites

  • Your OpenShift Container Platform installation failed before the bootstrap process finished. The bootstrap node is running and accessible through SSH.

  • The ssh-agent process is active on your computer, and you provided the same SSH key to both the ssh-agent process and the installation program.

  • If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the bootstrap and control plane nodes (also known as the master nodes).

Procedure

  1. Generate the commands that are required to obtain the installation logs from
    the bootstrap and control plane machines:

    • If you used installer-provisioned infrastructure, change to the directory that contains the installation program and run the following command:

      $ ./openshift-install gather bootstrap --dir <installation_directory> (1)
      1 installation_directory is the directory you specified when you ran ./openshift-install create cluster. This directory contains the OpenShift Container Platform
      definition files that the installation program creates.

      For installer-provisioned infrastructure, the installation program stores
      information about the cluster, so you do not specify the hostnames or IP
      addresses.

    • If you used infrastructure that you provisioned yourself, change to the directory that contains the installation program and run the following
      command:

      $ ./openshift-install gather bootstrap --dir <installation_directory> \  (1)
          --bootstrap <bootstrap_address> \  (2)
          --master <master_1_address> \  (3)
          --master <master_2_address> \  (3)
          --master <master_3_address>  (3)

      1 For installation_directory, specify the same directory you specified when you ran ./openshift-install create cluster. This directory contains the OpenShift Container Platform
      definition files that the installation program creates.
      2 <bootstrap_address> is the fully qualified domain name or IP address of
      the cluster’s bootstrap machine.
      3 For each control plane, or master, machine in your cluster, replace <master_*_address> with its fully qualified domain name or IP address.

      A default cluster contains three control plane machines. List all of your control plane machines as shown, no matter how many your cluster uses.

    Example output

    INFO Pulling debug logs from the bootstrap machine
    INFO Bootstrap gather logs captured here "<installation_directory>/log-bundle-<timestamp>.tar.gz"

    If you open a Red Hat support case about your installation failure, include
    the compressed logs in the case.

Additional resources

  • See Installation process for more details on OpenShift Container Platform installation types and process.

Whether you’re looking for a quick fix for something or gearing up for future troubleshooting, all of these are pretty standard errors you’ll run into as you’re developing on OpenShift. Below are the 10 most common ones I’ve seen when working with developers who are getting started with the platform.

But first:

Where to Look for Error Info

Pod/Container Logs

If your build or deployment started and failed halfway through, this is the best place to start. You can see build logs by opening the logs for the specific build.

You can see deployment logs by looking at the specific deployment and either looking at that deployment’s logs or the pod’s logs directly. Click on the "1 pod" section to find that deployment’s pods, then click "Logs."

Monitoring/Events

Most OpenShift objects include an "Events" tab so you can watch new events as they happen. You can also see all of the events happening in the project by clicking on "Monitoring" in the sidebar.

Most of the time, errors will be visible in either of those locations.

10 Common Errors

1. Missing configmap/secret/volume in deployment config

This will appear as a "RunContainerError" when your pods are attempting to spin up. If the required ConfigMap/Secret is missing, or if the key you’re looking for in a ConfigMap/Secret is missing, you’ll see this error under "Events."
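
As a sketch (the ConfigMap and key names here are hypothetical), this is the kind of reference in the pod template that triggers the error when the ConfigMap or its key doesn’t exist:

```yaml
# Pod template fragment: the pod fails with RunContainerError
# if "app-config" or its "db.host" key is missing from the project.
spec:
  containers:
    - name: hello-world
      env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: app-config   # must exist in this namespace
              key: db.host       # must exist in the ConfigMap
```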

2. Health check using the wrong port

This one is a bit harder to find generally, but if your application looks like it has spun up fine with no errors and then appears as Failed with the pods constantly restarting, the liveness probe might be hitting the wrong port. Your readiness probe should also hit the correct port, but it won’t restart the pod if it fails (the pod will just appear as "not ready").
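
A hedged sketch of the two probes (path, port, and timings are hypothetical and must match what your application actually serves):

```yaml
# Container spec fragment: a wrong livenessProbe port restarts the pod
# in a loop; a wrong readinessProbe port only leaves it "not ready".
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080        # must be the port the app really listens on
  initialDelaySeconds: 15
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
```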

3. Missing build secret for authenticating with source repo

If you’re seeing a Fetch source failed error when you try to build, you might need to set up a build secret to authenticate with your Git repo. This will either be a username and password (new-basicauth secret) or an SSH key (new-sshauth secret) depending on the URL.
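
For the username/password case, a minimal CLI sketch (secret and BuildConfig names are hypothetical):

```shell
$ oc create secret generic repo-secret \
    --from-literal=username=<user> \
    --from-literal=password=<password> \
    --type=kubernetes.io/basic-auth
$ oc set build-secret --source bc/hello-world repo-secret
```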

4. Project quota exceeded

The system admins for your OCP cluster usually set project quotas to keep individual projects from taking up too many resources. If you’ve already reached your project quota, trying to deploy a new container will fail. You can decrease replicas for other containers, reduce the resource requests/limits for each service, or get the OCP admins to increase your project quota.
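
To see how close you are to the quota, you can inspect it from the CLI (the quota name varies by cluster):

```shell
$ oc get quota
$ oc describe quota <quota-name>
```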

5. Resources outside of request/limit ratio

In addition to project quotas, sometimes OCP admins will add limits on what individual pods can request in terms of CPU and memory. Sometimes you’ll be inside the limit range for the pod but you’ll still get an error about your resource request/limit. This is because there can also be max/min ratios set on pod resources that require your request and limit values to be within a certain ratio.
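
These ratios come from a LimitRange in the project. A hypothetical example: with a maxLimitRequestRatio of 4 on CPU, a container requesting 100m may not set a limit above 400m, even though 400m is well inside the absolute max.

```yaml
# Hypothetical LimitRange: requests and limits must stay within
# the maxLimitRequestRatio, in addition to the absolute min/max.
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
    - type: Container
      min:
        cpu: 50m
      max:
        cpu: "2"
      maxLimitRequestRatio:
        cpu: "4"
```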

6. Build terminates with exit code 137

I really only see this on Maven builds, but it’s not limited to that. This is an Out Of Memory (OOM) error while trying to build. Increase the memory request and limit on your BuildConfig and this should go away. 
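
A sketch of the relevant BuildConfig fragment (the values here are illustrative; size them to your build):

```yaml
# BuildConfig fragment: raising the build pod's memory request/limit
# avoids the OOM kill (exit code 137) during the build.
spec:
  resources:
    requests:
      memory: 1Gi
    limits:
      memory: 2Gi
```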

7. Image pull from external registry instead of internal

If you’re seeing an image pull error on deployment, there are a couple of reasons that this could be happening. In most cases, you probably aren’t trying to get the "hello-world" image from registry.access.redhat.com but want to retrieve it from the internal OCP registry instead. If this is the case, you should take a look at the ImageChange trigger in your DeploymentConfig and make sure that it is properly set up to update to the latest image when a new one is pushed to the internal registry.

To fix your current failed deployment, you’ll need to get the full docker pull spec for the image and copy that into your DeploymentConfig. Go to the ImageStream for the image you want and click "Actions" and "Edit YAML."

In the YAML, search for "dockerImageReference" and copy its value.
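
The value will look something like this (the registry hostname differs by cluster version, and the project name and digest here are placeholders):

```
docker-registry.default.svc:5000/my-project/hello-world@sha256:<digest>
```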

Then paste this value into the image field for that container in your DeploymentConfig.
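
A hedged sketch of where it goes (container name, project, and digest are hypothetical):

```yaml
# DeploymentConfig fragment: the full pull spec copied from the
# ImageStream's dockerImageReference goes into the container's image field.
spec:
  template:
    spec:
      containers:
        - name: hello-world
          image: docker-registry.default.svc:5000/my-project/hello-world@sha256:<digest>
```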

This will ensure that your DeploymentConfig is pulling the correct image from the local registry instead of going out to an external registry.

8. Environment variables are "invalid"

If you try to use "oc apply" to update an environment variable from a name/value pair to a "valueFrom" retrieved from a ConfigMap or Secret, or vice versa, you’ll get this error:

The DeploymentConfig "hello-world" is invalid: spec.template.spec.containers[0].env[0].valueFrom: Invalid value: "": may not be specified when `value` is not empty

There are a couple of bug reports out for this error, but there isn’t a fix as of this post. The easiest way to get rid of this error is to delete all of the environment variables for your DeploymentConfig and run the update again, or update them manually in the console rather than running "oc apply."
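
For context, these are the two mutually exclusive forms of an env entry (names here are hypothetical). Switching a single entry from one form to the other with "oc apply" is what triggers the error, because the merge leaves the old field in place alongside the new one:

```yaml
env:
  - name: DB_HOST
    value: db.example.com        # plain name/value pair
  - name: DB_PASSWORD
    valueFrom:                   # value sourced from a Secret instead
      secretKeyRef:
        name: db-credentials
        key: password
```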

9. Deployment config always appears as "canceled"

This error is caused by the deployment version number. Suppose you’re currently running deployment #6, but somehow deployments #1 and #2 are more recent: the latest deployment/replication controller must always have the highest number. The reset to #1 generally happens if the DeploymentConfig is deleted and recreated with "oc delete"/"oc create" or "oc replace." If this happens, the quickest way to get new deployments running again is to delete all of the previous replication controllers and re-deploy.
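
A CLI sketch of that cleanup (the app label, RC names, and DeploymentConfig name are hypothetical):

```shell
$ oc get rc -l app=hello-world              # list this app's replication controllers
$ oc delete rc hello-world-1 hello-world-2  # remove the stale, lower-numbered RCs
$ oc rollout latest dc/hello-world          # kick off a fresh deployment
```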

10. Build succeeds but fails to push image

Private registries sometimes require image push or image pull secrets for security purposes. If a BuildConfig doesn’t include this secret or includes the wrong one for a secured registry, the build will succeed but fail at the final push step with an authentication error.

You can fix this by editing your BuildConfig and choosing "Show advanced options" to choose the right image push/pull secrets.
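
In the BuildConfig YAML, the same fix looks roughly like this (registry, project, and secret names are hypothetical):

```yaml
# BuildConfig fragment: pushSecret must hold credentials
# for the registry named in the output image reference.
spec:
  output:
    to:
      kind: DockerImage
      name: registry.example.com/my-project/hello-world:latest
    pushSecret:
      name: registry-push-secret
```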
