HTTP error 401: try running 'pcs cluster auth'


I want to create a 3-node cluster with pcs on Debian stretch. After authorizing all nodes, when I try to add the nodes to the cluster I get this error.

asif@db01:~$ sudo pcs --version
0.9.155

Step 1:

sudo pcs cluster auth 192.168.101.11 192.168.101.12 192.168.101.13 -u hacluster --debug
Password: 
Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb auth
--Debug Input Start--
{"username": "hacluster", "local": false, "nodes": ["192.168.101.12", "192.168.101.13", "192.168.101.11"], "password": "1234", "force": false}
--Debug Input End--
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
    "auth_responses": {
      "192.168.101.11": {
        "status": "ok",
        "token": "f9d6d677-ff13-4d36-98bb-b87fc367580d"
      },
      "192.168.101.13": {
        "status": "ok",
        "token": "20d052f9-ced2-493b-918d-eb8b0efea357"
      },
      "192.168.101.12": {
        "status": "ok",
        "token": "ba7322ed-230b-4e85-954e-ddc355cbf382"
      }
    },
    "sync_successful": true,
    "sync_nodes_err": [

    ],
    "sync_responses": {
    }
  },
  "log": [
    "I, [2017-11-24T13:50:02.619145 #21081]  INFO -- : PCSD Debugging enabledn",
    "D, [2017-11-24T13:50:02.619184 #21081] DEBUG -- : Did not detect RHEL 6n",
    "I, [2017-11-24T13:50:02.619208 #21081]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_namen",
    "I, [2017-11-24T13:50:02.619224 #21081]  INFO -- : CIB USER: hacluster, groups: n",
    "D, [2017-11-24T13:50:02.623700 #21081] DEBUG -- : ["totem.cluster_name (str) = debian\n"]n",
    "D, [2017-11-24T13:50:02.623758 #21081] DEBUG -- : []n",
    "D, [2017-11-24T13:50:02.623780 #21081] DEBUG -- : Duration: 0.004468055sn",
    "I, [2017-11-24T13:50:02.623846 #21081]  INFO -- : Return Value: 0n",
    "I, [2017-11-24T13:50:02.624104 #21081]  INFO -- : SRWT Node: 192.168.101.13 Request: check_authn",
    "E, [2017-11-24T13:50:02.624127 #21081] ERROR -- : Unable to connect to node 192.168.101.13, no token availablen",
    "I, [2017-11-24T13:50:02.624182 #21081]  INFO -- : SRWT Node: 192.168.101.12 Request: check_authn",
    "E, [2017-11-24T13:50:02.624197 #21081] ERROR -- : Unable to connect to node 192.168.101.12, no token availablen",
    "I, [2017-11-24T13:50:02.624232 #21081]  INFO -- : SRWT Node: 192.168.101.11 Request: check_authn",
    "E, [2017-11-24T13:50:02.624245 #21081] ERROR -- : Unable to connect to node 192.168.101.11, no token availablen",
    "I, [2017-11-24T13:50:02.712490 #21081]  INFO -- : Running: /usr/sbin/pcs status nodes corosyncn",
    "I, [2017-11-24T13:50:02.712582 #21081]  INFO -- : CIB USER: hacluster, groups: n",
    "D, [2017-11-24T13:50:02.942045 #21081] DEBUG -- : ["Corosync Nodes:\n", " Online:\n", " Offline:\n"]n",
    "D, [2017-11-24T13:50:02.942120 #21081] DEBUG -- : []n",
    "D, [2017-11-24T13:50:02.942141 #21081] DEBUG -- : Duration: 0.229461048sn",
    "I, [2017-11-24T13:50:02.942204 #21081]  INFO -- : Return Value: 0n",
    "I, [2017-11-24T13:50:02.942493 #21081]  INFO -- : Sending config 'tokens' version 1 efb121ae8ede7b957173795a7ffeaf598fcd5469 to nodes: n"
  ]
}
--Debug Output End--

192.168.101.12: Authorized
192.168.101.13: Authorized
192.168.101.11: Authorized
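
pcsd stores the tokens it accepts in /var/lib/pcsd/tokens (the path the debug logs throughout this page keep referring to). A quick check at this point is whether the auth step actually persisted anything:

# the token file pcsd reads and writes (path as in the logs above)
sudo cat /var/lib/pcsd/tokens

If the file is missing or empty even though auth printed "Authorized", note the last log line of step 1: "Sending config 'tokens' version 1 ... to nodes: " with an empty node list; the tokens were handed to a cluster-config sync that went nowhere.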

Step 2:

sudo pcs cluster setup --name pg_cluster 192.168.101.11 192.168.101.12 192.168.101.13 --start --enable --debug
Running: /usr/sbin/corosync -v

Finished running: /usr/sbin/corosync -v
Return value: 0
--Debug Stdout Start--
Corosync Cluster Engine, version '2.4.2'
Copyright (c) 2006-2009 Red Hat, Inc.

--Debug Stdout End--
--Debug Stderr Start--

--Debug Stderr End--

Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb read_tokens
--Debug Input Start--
{}
--Debug Input End--
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
  },
  "log": [
    "I, [2017-11-24T13:50:13.652610 #21174]  INFO -- : PCSD Debugging enabledn",
    "D, [2017-11-24T13:50:13.652650 #21174] DEBUG -- : Did not detect RHEL 6n",
    "I, [2017-11-24T13:50:13.652677 #21174]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_namen",
    "I, [2017-11-24T13:50:13.652691 #21174]  INFO -- : CIB USER: hacluster, groups: n",
    "D, [2017-11-24T13:50:13.657279 #21174] DEBUG -- : ["totem.cluster_name (str) = debian\n"]n",
    "D, [2017-11-24T13:50:13.657342 #21174] DEBUG -- : []n",
    "D, [2017-11-24T13:50:13.657390 #21174] DEBUG -- : Duration: 0.004577442sn",
    "I, [2017-11-24T13:50:13.657441 #21174]  INFO -- : Return Value: 0n"
  ]
}
--Debug Output End--

Sending HTTP Request to: https://192.168.101.11:2224/remote/node_available
Data: None
Response Code: 401
--Debug Response Start--
{"notauthorized":"true"}--Debug Response End--
Error: 192.168.101.11: error checking node availability: Unable to authenticate to 192.168.101.11 - (HTTP error: 401), try running 'pcs cluster auth'
Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb read_tokens
--Debug Input Start--
{}
--Debug Input End--
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
  },
  "log": [
    "I, [2017-11-24T13:50:13.928772 #21184]  INFO -- : PCSD Debugging enabledn",
    "D, [2017-11-24T13:50:13.928812 #21184] DEBUG -- : Did not detect RHEL 6n",
    "I, [2017-11-24T13:50:13.928836 #21184]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_namen",
    "I, [2017-11-24T13:50:13.928851 #21184]  INFO -- : CIB USER: hacluster, groups: n",
    "D, [2017-11-24T13:50:13.933421 #21184] DEBUG -- : ["totem.cluster_name (str) = debian\n"]n",
    "D, [2017-11-24T13:50:13.933487 #21184] DEBUG -- : []n",
    "D, [2017-11-24T13:50:13.933508 #21184] DEBUG -- : Duration: 0.004558361sn",
    "I, [2017-11-24T13:50:13.933561 #21184]  INFO -- : Return Value: 0n"
  ]
}
--Debug Output End--

Sending HTTP Request to: https://192.168.101.12:2224/remote/node_available
Data: None
Response Code: 401
--Debug Response Start--
{"notauthorized":"true"}--Debug Response End--
Error: 192.168.101.12: error checking node availability: Unable to authenticate to 192.168.101.12 - (HTTP error: 401), try running 'pcs cluster auth'
Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb read_tokens
--Debug Input Start--
{}
--Debug Input End--
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
  },
  "log": [
    "I, [2017-11-24T13:50:14.203895 #21193]  INFO -- : PCSD Debugging enabledn",
    "D, [2017-11-24T13:50:14.203935 #21193] DEBUG -- : Did not detect RHEL 6n",
    "I, [2017-11-24T13:50:14.203958 #21193]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_namen",
    "I, [2017-11-24T13:50:14.203973 #21193]  INFO -- : CIB USER: hacluster, groups: n",
    "D, [2017-11-24T13:50:14.208645 #21193] DEBUG -- : ["totem.cluster_name (str) = debian\n"]n",
    "D, [2017-11-24T13:50:14.208721 #21193] DEBUG -- : []n",
    "D, [2017-11-24T13:50:14.208743 #21193] DEBUG -- : Duration: 0.004661972sn",
    "I, [2017-11-24T13:50:14.208796 #21193]  INFO -- : Return Value: 0n"
  ]
}
--Debug Output End--

Sending HTTP Request to: https://192.168.101.13:2224/remote/node_available
Data: None
Response Code: 401
--Debug Response Start--
{"notauthorized":"true"}--Debug Response End--
Error: 192.168.101.13: error checking node availability: Unable to authenticate to 192.168.101.13 - (HTTP error: 401), try running 'pcs cluster auth'
Error: nodes availability check failed, use --force to override. WARNING: This will destroy existing cluster on the nodes.
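
Two details in the step 1 debug output explain where the token went: pcsd detected an existing cluster name ("totem.cluster_name (str) = debian", which comes from the stock corosync.conf that Debian ships), so it treated the fresh tokens as a cluster-synced config, and the log line "Sending config 'tokens' version 1 ... to nodes: " shows an empty node list, so the tokens were never written anywhere pcsd can read them back. This is the same conclusion the Debian bug log further down this page reaches; its README.Debian workaround, expressed as commands (a sketch; run on every node before retrying the setup):

# clear the default Debian cluster configuration so pcsd starts clean
sudo pcs cluster destroy        # roughly equivalent: sudo rm /etc/corosync/corosync.conf
sudo systemctl restart pcsd

# then, from one node, authenticate and set up again
sudo pcs cluster auth 192.168.101.11 192.168.101.12 192.168.101.13 -u hacluster
sudo pcs cluster setup --name pg_cluster 192.168.101.11 192.168.101.12 192.168.101.13 --start --enable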

pcs cluster auth unable to connect (Launchpad bug against pcs in Ubuntu)

After installing PCS on (x)Ubuntu 16.04, I'm unable to successfully run "pcs cluster auth".
I was following the documentation from http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/_enable_pcs_daemon.html

To reproduce:

install Ubuntu 16.04
set up networking
install pcs: apt-get -y install pcs
start pcsd: systemctl start pcsd
(optionally enable it on boot: systemctl enable pcsd)
set password for "hacluster": passwd hacluster
attempt cluster authorisation (which fails): pcs cluster auth <hostname>
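
Condensed into one shell session (same package, service, and account names as the steps above):

apt-get -y install pcs          # pulls in pcsd
systemctl start pcsd
systemctl enable pcsd           # optional: start on boot
passwd hacluster                # the password 'pcs cluster auth' will ask for
pcs cluster auth <hostname>     # this is the step that fails below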

example:
root@uk01pvmh020:~# pcs cluster auth uk01pvmh020
Username: hacluster
Password:
Error: Unable to communicate with uk01pvmh020
root@uk01pvmh020:~#

Following the same procedure with RHEL7 actually works, so as a workaround I could eschew the use of Ubuntu for clustering services (am looking to get clvm working).

Here's the output with "--debug" enabled:

root@uk01pvmh020:~# pcs --debug cluster auth uk01pvmh020
Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb read_tokens
--Debug Input Start--
{}
--Debug Input End--
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
  },
  "log": [
    "I, [2016-05-21T17:38:52.731105 #14594]  INFO -- : PCSD Debugging enabled\n",
    "D, [2016-05-21T17:38:52.731174 #14594] DEBUG -- : Did not detect RHEL 6\n",
    "I, [2016-05-21T17:38:52.731207 #14594]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_name\n",
    "I, [2016-05-21T17:38:52.731229 #14594]  INFO -- : CIB USER: hacluster, groups: \n",
    "D, [2016-05-21T17:38:52.740894 #14594] DEBUG -- : [\"totem.cluster_name (str) = debian\\n\"]\n",
    "D, [2016-05-21T17:38:52.741017 #14594] DEBUG -- : []\n",
    "D, [2016-05-21T17:38:52.741059 #14594] DEBUG -- : Duration: 0.009645066s\n",
    "I, [2016-05-21T17:38:52.741155 #14594]  INFO -- : Return Value: 0\n",
    "W, [2016-05-21T17:38:52.741423 #14594]  WARN -- : Cannot read config 'tokens' from '/var/lib/pcsd/tokens': No such file or directory @ rb_sysopen - /var/lib/pcsd/tokens\n",
    "E, [2016-05-21T17:38:52.741508 #14594] ERROR -- : Unable to parse tokens file: A JSON text must at least contain two octets!\n"
  ]
}
--Debug Output End--

Sending HTTP Request to: https://uk01pvmh020:2224/remote/check_auth
Data: None
Response Reason: Tunnel connection failed: 403 Forbidden
Username: hacluster
Password:
Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb auth
--Debug Input Start--
{"username": "hacluster", "local": false, "nodes": ["uk01pvmh020"], "password": "retsulcah", "force": false}
--Debug Input End--
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
    "auth_responses": {
      "uk01pvmh020": {
        "status": "noresponse"
      }
    },
    "sync_successful": true,
    "sync_nodes_err": [

    ],
    "sync_responses": {
    }
  },
  "log": [
    "I, [2016-05-21T17:39:00.392737 #14611]  INFO -- : PCSD Debugging enabled\n",
    "D, [2016-05-21T17:39:00.392806 #14611] DEBUG -- : Did not detect RHEL 6\n",
    "I, [2016-05-21T17:39:00.392838 #14611]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_name\n",
    "I, [2016-05-21T17:39:00.392860 #14611]  INFO -- : CIB USER: hacluster, groups: \n",
    "D, [2016-05-21T17:39:00.402354 #14611] DEBUG -- : [\"totem.cluster_name (str) = debian\\n\"]\n",
    "D, [2016-05-21T17:39:00.402461 #14611] DEBUG -- : []\n",
    "D, [2016-05-21T17:39:00.402513 #14611] DEBUG -- : Duration: 0.009475549s\n",
    "I, [2016-05-21T17:39:00.402595 #14611]  INFO -- : Return Value: 0\n",
    "W, [2016-05-21T17:39:00.403098 #14611]  WARN -- : Cannot read config 'tokens' from '/var/lib/pcsd/tokens': No such file or directory @ rb_sysopen - /var/lib/pcsd/tokens\n",
    "E, [2016-05-21T17:39:00.403238 #14611] ERROR -- : Unable to parse tokens file: A JSON text must at least contain two octets!\n",
    "I, [2016-05-21T17:39:00.403273 #14611]  INFO -- : SRWT Node: uk01pvmh020 Request: check_auth\n",
    "E, [2016-05-21T17:39:00.403298 #14611] ERROR -- : Unable to connect to node uk01pvmh020, no token available\n",
    "I, [2016-05-21T17:39:00.409109 #14611]  INFO -- : No response from: uk01pvmh020 request: /auth, exception: 403 \"Forbidden\"\n"
  ]
}
--Debug Output End--

Error: Unable to communicate with uk01pvmh020
root@uk01pvmh020:~#

For reference, here’s the debug output from a RHEL7/OL7 machine:

[root@uk01vort003 pcsd]# pcs --debug cluster auth uk01vort003
Running: /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb read_tokens
--Debug Input Start--
{}
--Debug Input End--

Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
  },
  "log": [
    "I, [2016-05-21T17:43:55.516093 #28868]  INFO -- : PCSD Debugging enabled\n",
    "D, [2016-05-21T17:43:55.516217 #28868] DEBUG -- : Did not detect RHEL 6\n",
    "I, [2016-05-21T17:43:55.516290 #28868]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_name\n",
    "I, [2016-05-21T17:43:55.516361 #28868]  INFO -- : CIB USER: hacluster, groups: \n",
    "D, [2016-05-21T17:43:55.520897 #28868] DEBUG -- : []\n",
    "D, [2016-05-21T17:43:55.520995 #28868] DEBUG -- : Duration: 0.004518704s\n",
    "I, [2016-05-21T17:43:55.521117 #28868]  INFO -- : Return Value: 1\n",
    "W, [2016-05-21T17:43:55.521284 #28868]  WARN -- : Cannot read config 'corosync.conf' from '/etc/corosync/corosync.conf': No such file or directory - /etc/corosync/corosync.conf\n",
    "W, [2016-05-21T17:43:55.521740 #28868]  WARN -- : Cannot read config 'tokens' from '/var/lib/pcsd/tokens': No such file or directory - /var/lib/pcsd/tokens\n",
    "E, [2016-05-21T17:43:55.521844 #28868] ERROR -- : Unable to parse tokens file: A JSON text must at least contain two octets!\n"
  ]
}

--Debug Output End--

Sending HTTP Request to: https://uk01vort003:2224/remote/check_auth
Data: None
Response Reason: Tunnel connection failed: 403 Forbidden
Username: hacluster
Password:
Running: /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb auth
--Debug Input Start--
{"username": "hacluster", "local": false, "nodes": ["uk01vort003"], "password": "retsulcah", "force": false}
--Debug Input End--

Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
    "auth_responses": {
      "uk01vort003": {
        "status": "ok",
        "token": "0d262df4-7f1b-4687-acc2-73e1febec81d"
      }
    },
    "sync_successful": true,
    "sync_nodes_err": [

    ],
    "sync_responses": {
    }
  },
  "log": [
    "I, [2016-05-21T17:44:02.473670 #28892]  INFO -- : PCSD Debugging enabled\n",
    "D, [2016-05-21T17:44:02.473833 #28892] DEBUG -- : Did not detect RHEL 6\n",
    "I, [2016-05-21T17:44:02.473904 #28892]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_name\n",
    "I, [2016-05-21T17:44:02.473974 #28892]  INFO -- : CIB USER: hacluster, groups: \n",
    "D, [2016-05-21T17:44:02.480170 #28892] DEBUG -- : []\n",
    "D, [2016-05-21T17:44:02.480552 #28892] DEBUG -- : Duration: 0.006169278s\n",
    "I, [2016-05-21T17:44:02.480737 #28892]  INFO -- : Return Value: 1\n",
    "W, [2016-05-21T17:44:02.481035 #28892]  WARN -- : Cannot read config 'corosync.conf' from '/etc/corosync/corosync.conf': No such file or directory - /etc/corosync/corosync.conf\n",
    "W, [2016-05-21T17:44:02.481756 #28892]  WARN -- : Cannot read config 'tokens' from '/var/lib/pcsd/tokens': No such file or directory - /var/lib/pcsd/tokens\n",
    "E, [2016-05-21T17:44:02.481865 #28892] ERROR -- : Unable to parse tokens file: A JSON text must at least contain two octets!\n",
    "I, [2016-05-21T17:44:02.481918 #28892]  INFO -- : SRWT Node: uk01vort003 Request: check_auth\n",
    "E, [2016-05-21T17:44:02.481959 #28892] ERROR -- : Unable to connect to node uk01vort003, no token available\n",
    "I, [2016-05-21T17:44:02.733483 #28892]  INFO -- : Running: /usr/sbin/pcs status nodes corosync\n",
    "I, [2016-05-21T17:44:02.733897 #28892]  INFO -- : CIB USER: hacluster, groups: \n",
    "D, [2016-05-21T17:44:02.910480 #28892] DEBUG -- : []\n",
    "D, [2016-05-21T17:44:02.910700 #28892] DEBUG -- : Duration: 0.176567846s\n",
    "I, [2016-05-21T17:44:02.910799 #28892]  INFO -- : Return Value: 1\n",
    "W, [2016-05-21T17:44:02.911164 #28892]  WARN -- : Cannot read config 'tokens' from '/var/lib/pcsd/tokens': No such file or directory - /var/lib/pcsd/tokens\n",
    "E, [2016-05-21T17:44:02.911296 #28892] ERROR -- : Unable to parse tokens file: A JSON text must at least contain two octets!\n",
    "I, [2016-05-21T17:44:02.912068 #28892]  INFO -- : Saved config 'tokens' version 1 a71824b42061fbb2a08f42069a4285ddbb8f8040 to '/var/lib/pcsd/tokens'\n"
  ]
}

--Debug Output End--

uk01vort003: Authorized
[root@uk01vort003 pcsd]#

Whilst investigating, I came across an issue with Ruby not using IPv4 connections, so I have tried to force IPv4 by editing /usr/share/pcsd/ssl.rb but even when I force it to use IPv4 for port 2224, it still doesn’t work.
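
One way to check whether pcsd answers over IPv4 at all, independent of Ruby's name resolution, is to hit the port with curl; -k is needed because pcsd serves a self-signed certificate, and the addresses here are placeholders for the host's real ones (a sketch):

# IPv4 explicitly
curl -kv https://192.0.2.10:2224/remote/check_auth
# IPv6 explicitly, for comparison
curl -kv 'https://[2001:db8::10]:2224/remote/check_auth'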

Bug 1165803 - pcs CLI should recognize and act upon "fail due to lack of authentication" state if/as suitable (e.g. for "pcs config restore")

Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Component: pcs
Version: 7.1
Priority: low
Severity: unspecified
Target Milestone: rc
Assignee: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
Reported: 2014-11-19 18:08 UTC by Jan Pokorný [poki]
Modified: 2015-11-19 09:33 UTC
Last Closed: 2015-11-19 09:33:38 UTC
Fixed In Version: pcs-0.9.140-1.el7

Doc Type: Bug Fix
Doc Text:
Cause: User runs the 'pcs config restore' command when some of the cluster nodes are not accessible by pcsd.
Consequence: A generic error message, which is not very helpful, is printed.
Fix: Print an explanatory error message helping the user to fix the issue.
Result: The user is informed about the cause of the error and how to fix it.

Attachments: proposed fix (1.59 KB, patch), 2015-05-05 12:45 UTC, Tomas Jelinek

Links:
Red Hat Bugzilla 1024492 (CLOSED) - pcs should handle full cluster config backup/restore (2021-02-22 00:41:40 UTC)
Red Hat Product Errata RHSA-2015:2290 (SHIPPED_LIVE) - Moderate: pcs security, bug fix, and enhancement update (2015-11-19 09:43:53 UTC)
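
For context, the 'pcs config restore' the bug refers to is half of pcs's backup/restore pair; a sketch of its use (syntax as in the pcs 0.9 man page; the exact backup file name may differ by version):

# archive the cluster configuration (corosync.conf, CIB, pcsd settings)
pcs config backup /root/cluster-backup

# restore it on all nodes, or locally only; the --local form does not
# need pcsd authentication to the other nodes, which is what this bug is about
pcs config restore /root/cluster-backup.tar.bz2
pcs config restore --local /root/cluster-backup.tar.bz2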



Debian Bug report logs - #911801
pacemaker: Cannot complete pcs cluster setup command, returns error HTTP 401

Reported by: Duncan Hare <dh@synoia.com>

Date: Thu, 25 Oct 2018 00:21:02 UTC

Severity: grave

Done: Valentin Vidic <vvidic@debian.org>

Bug is archived. No further changes may be made.


Message #5 received at submit@bugs.debian.org:

Package: pacemaker
Version: 1.1.16-1
Severity: grave
Justification: causes non-serious data loss



-- System Information:
Debian Release: 9.5
  APT prefers stable-updates
  APT policy: (500, 'stable-updates'), (500, 'stable')
Architecture: amd64 (x86_64)

Kernel: Linux 4.9.0-8-amd64 (SMP w/1 CPU core)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages pacemaker depends on:
ii  corosync                   2.4.2-3+deb9u1
ii  dbus                       1.10.26-0+deb9u1
ii  init-system-helpers        1.48
ii  libc6                      2.24-11+deb9u3
ii  libcfg6                    2.4.2-3+deb9u1
ii  libcib4                    1.1.16-1
ii  libcmap4                   2.4.2-3+deb9u1
ii  libcorosync-common4        2.4.2-3+deb9u1
ii  libcpg4                    2.4.2-3+deb9u1
ii  libcrmcluster4             1.1.16-1
ii  libcrmcommon3              1.1.16-1
ii  libcrmservice3             1.1.16-1
ii  libglib2.0-0               2.50.3-2
ii  libgnutls30                3.5.8-5+deb9u3
ii  liblrmd1                   1.1.16-1
ii  libpam0g                   1.1.8-3.6
ii  libpe-rules2               1.1.16-1
ii  libpe-status10             1.1.16-1
ii  libpengine10               1.1.16-1
ii  libqb0                     1.0.1-1
ii  libquorum5                 2.4.2-3+deb9u1
ii  libstonithd2               1.1.16-1
ii  libtransitioner2           1.1.16-1
ii  lsb-base                   9.20161125
ii  pacemaker-common           1.1.16-1
ii  pacemaker-resource-agents  1.1.16-1
ii  perl                       5.24.1-3+deb9u4

Versions of packages pacemaker recommends:
ii  fence-agents         4.0.25-1
ii  pacemaker-cli-utils  1.1.16-1

Versions of packages pacemaker suggests:
ii  cluster-glue  1.0.12-5
ii  pcs           0.9.155+dfsg-2+deb9u1


Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb auth
--Debug Input Start--
{"username": "hacluster", "local": false, "nodes": ["greene", "pinke"], "password": "Cl3nt4n@@", "force": true}
--Debug Input End--
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
    "auth_responses": {
      "pinke": {
        "status": "ok",
        "token": "c4dca519-18ab-4ca9-b218-af6e03b3731d"
      },
      "greene": {
        "status": "ok",
        "token": "d0ea589d-0a05-40d8-a366-82b40d557465"
      }
    },
    "sync_successful": true,
    "sync_nodes_err": [

    ],
    "sync_responses": {
    }
  },
  "log": [
    "I, [2018-10-24T17:03:01.093531 #7491]  INFO -- : PCSD Debugging enabledn",
    "D, [2018-10-24T17:03:01.093652 #7491] DEBUG -- : Did not detect RHEL 6n",
    "I, [2018-10-24T17:03:01.093733 #7491]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_namen",
    "I, [2018-10-24T17:03:01.093816 #7491]  INFO -- : CIB USER: hacluster, groups: n",
    "D, [2018-10-24T17:03:01.134130 #7491] DEBUG -- : ["totem.cluster_name (str) = debian\n"]n",
    "D, [2018-10-24T17:03:01.134912 #7491] DEBUG -- : []n",
    "D, [2018-10-24T17:03:01.135169 #7491] DEBUG -- : Duration: 0.040189124sn",
    "I, [2018-10-24T17:03:01.135618 #7491]  INFO -- : Return Value: 0n",
    "I, [2018-10-24T17:03:01.430297 #7491]  INFO -- : Running: /usr/sbin/pcs status nodes corosyncn",
    "I, [2018-10-24T17:03:01.430644 #7491]  INFO -- : CIB USER: hacluster, groups: n",
    "D, [2018-10-24T17:03:03.173921 #7491] DEBUG -- : ["Corosync Nodes:\n", " Online:\n", " Offline:\n"]n",
    "D, [2018-10-24T17:03:03.174581 #7491] DEBUG -- : []n",
    "D, [2018-10-24T17:03:03.174852 #7491] DEBUG -- : Duration: 1.743259319sn",
    "I, [2018-10-24T17:03:03.175252 #7491]  INFO -- : Return Value: 0n",
    "I, [2018-10-24T17:03:03.184745 #7491]  INFO -- : Sending config 'tokens' version 1 baa843266738f0107221006d0e7dfd43ba74c73b to nodes: n"
  ]
}
--Debug Output End--


Running: /usr/sbin/corosync -v

Finished running: /usr/sbin/corosync -v
Return value: 0
--Debug Stdout Start--
Corosync Cluster Engine, version '2.4.2'
Copyright (c) 2006-2009 Red Hat, Inc.

--Debug Stdout End--
--Debug Stderr Start--

--Debug Stderr End--

Destroying cluster on nodes: pinke, greene...
Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb read_tokens
--Debug Input Start--
{}
--Debug Input End--
Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb read_tokens
--Debug Input Start--
{}
--Debug Input End--
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
  },
  "log": [
    "I, [2018-10-24T17:03:36.610130 #7562]  INFO -- : PCSD Debugging enabledn",
    "D, [2018-10-24T17:03:36.610225 #7562] DEBUG -- : Did not detect RHEL 6n",
    "I, [2018-10-24T17:03:36.610297 #7562]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_namen",
    "I, [2018-10-24T17:03:36.610391 #7562]  INFO -- : CIB USER: hacluster, groups: n",
    "D, [2018-10-24T17:03:36.793991 #7562] DEBUG -- : ["totem.cluster_name (str) = debian\n"]n",
    "D, [2018-10-24T17:03:36.794664 #7562] DEBUG -- : []n",
    "D, [2018-10-24T17:03:36.794907 #7562] DEBUG -- : Duration: 0.18349867sn",
    "I, [2018-10-24T17:03:36.804683 #7562]  INFO -- : Return Value: 0n"
  ]
}
--Debug Output End--

Sending HTTP Request to: https://greene:2224/remote/cluster_stop
Data: force=1&component=pacemaker
Response Code: 401
--Debug Response Start--
{"notauthorized":"true"}--Debug Response End--
greene: Unable to authenticate to greene - (HTTP error: 401), try running 'pcs cluster auth'
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
  },
  "log": [
    "I, [2018-10-24T17:03:36.785802 #7561]  INFO -- : PCSD Debugging enabledn",
    "D, [2018-10-24T17:03:36.785993 #7561] DEBUG -- : Did not detect RHEL 6n",
    "I, [2018-10-24T17:03:36.786214 #7561]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_namen",
    "I, [2018-10-24T17:03:36.786287 #7561]  INFO -- : CIB USER: hacluster, groups: n",
    "D, [2018-10-24T17:03:37.025018 #7561] DEBUG -- : ["totem.cluster_name (str) = debian\n"]n",
    "D, [2018-10-24T17:03:37.025680 #7561] DEBUG -- : []n",
    "D, [2018-10-24T17:03:37.025886 #7561] DEBUG -- : Duration: 0.238586259sn",
    "I, [2018-10-24T17:03:37.026396 #7561]  INFO -- : Return Value: 0n"
  ]
}
--Debug Output End--

Sending HTTP Request to: https://pinke:2224/remote/cluster_stop
Data: force=1&component=pacemaker
Response Code: 401
--Debug Response Start--
{"notauthorized":"true"}--Debug Response End--
pinke: Unable to authenticate to pinke - (HTTP error: 401), try running 'pcs cluster auth'
Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb read_tokens
--Debug Input Start--
{}
--Debug Input End--
Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb read_tokens
--Debug Input Start--
{}
--Debug Input End--
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
  },
  "log": [
    "I, [2018-10-24T17:03:41.349512 #7583]  INFO -- : PCSD Debugging enabledn",
    "D, [2018-10-24T17:03:41.349694 #7583] DEBUG -- : Did not detect RHEL 6n",
    "I, [2018-10-24T17:03:41.349847 #7583]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_namen",
    "I, [2018-10-24T17:03:41.349948 #7583]  INFO -- : CIB USER: hacluster, groups: n",
    "D, [2018-10-24T17:03:41.419400 #7583] DEBUG -- : ["totem.cluster_name (str) = debian\n"]n",
    "D, [2018-10-24T17:03:41.420024 #7583] DEBUG -- : []n",
    "D, [2018-10-24T17:03:41.420277 #7583] DEBUG -- : Duration: 0.069298793sn",
    "I, [2018-10-24T17:03:41.420779 #7583]  INFO -- : Return Value: 0n"
  ]
}
--Debug Output End--

Sending HTTP Request to: https://pinke:2224/remote/cluster_destroy
Data: None
Response Code: 401
--Debug Response Start--
{"notauthorized":"true"}--Debug Response End--
pinke: Unable to authenticate to pinke - (HTTP error: 401), try running 'pcs cluster auth'
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
  },
  "log": [
    "I, [2018-10-24T17:03:41.479439 #7584]  INFO -- : PCSD Debugging enabledn",
    "D, [2018-10-24T17:03:41.479616 #7584] DEBUG -- : Did not detect RHEL 6n",
    "I, [2018-10-24T17:03:41.479781 #7584]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_namen",
    "I, [2018-10-24T17:03:41.479934 #7584]  INFO -- : CIB USER: hacluster, groups: n",
    "D, [2018-10-24T17:03:41.542663 #7584] DEBUG -- : ["totem.cluster_name (str) = debian\n"]n",
    "D, [2018-10-24T17:03:41.543482 #7584] DEBUG -- : []n",
    "D, [2018-10-24T17:03:41.543774 #7584] DEBUG -- : Duration: 0.062651059sn",
    "I, [2018-10-24T17:03:41.546667 #7584]  INFO -- : Return Value: 0n"
  ]
}
--Debug Output End--

Sending HTTP Request to: https://greene:2224/remote/cluster_destroy
Data: None
Response Code: 401
--Debug Response Start--
{"notauthorized":"true"}--Debug Response End--
greene: Unable to authenticate to greene - (HTTP error: 401), try running 'pcs cluster auth'


-- no debconf information



Bug reassigned from package 'pacemaker' to 'pcs', and no longer marked as found in versions pacemaker/1.1.16-1. Request was from Valentin Vidic <vvidic@debian.org>. (Thu, 25 Oct 2018 16:39:03 GMT)


Message #14 received at 911801@bugs.debian.org:

On Wed, Oct 24, 2018 at 05:19:02PM -0700, Duncan Hare wrote:
> Package: pacemaker
> Version: 1.1.16-1
> Severity: grave
> Justification: causes non-serious data loss

I've reassigned this to pcs package, since it probably doesn't have to
do with pacemaker, but I'm not sure what is going on here. Can you
provide some more info on the problem and pcs commands that were used
so I can try to reproduce?

Also, perhaps the README.Debian included in the pcs package could help
if this is an initial installation of the cluster.

-- 
Valentin



Message #19 received at 911801@bugs.debian.org:
root@greene:/home/duncan# pcs cluster setup --name pacemaker1 pinke greene
greene: Authorized
pinke: Authorized
root@greene:/home/duncan# pcs cluster setup --name pacemaker1 pinke greene --force
Destroying cluster on nodes: pinke, greene...
pinke: Unable to authenticate to pinke - (HTTP error: 401), try running 'pcs cluster auth'
greene: Unable to authenticate to greene - (HTTP error: 401), try running 'pcs cluster auth'
pinke: Unable to authenticate to pinke - (HTTP error: 401), try running 'pcs cluster auth'
greene: Unable to authenticate to greene - (HTTP error: 401), try running 'pcs cluster auth'
Error: unable to destroy cluster
greene: Unable to authenticate to greene - (HTTP error: 401), try running 'pcs cluster auth'
pinke: Unable to authenticate to pinke - (HTTP error: 401), try running 'pcs cluster auth'
root@greene:/home/duncan#

this works: rm /etc/corosync/corosync.conf

Debian Bug report logs - #847295
pcs cluster setup does not overwrite existing config files, and then the cluster create fails.
Thanks

Duncan Hare

714 931 7952


Message #24 received at 911801@bugs.debian.org:

On Thu, Oct 25, 2018 at 05:11:17PM +0000, Duncan Hare wrote:
> root@greene:/home/duncan# pcs cluster setup --name pacemaker1 pinke greene
> greene: Authorized
> pinke: Authorizedroot@greene:/home/duncan#root@greene:/home/duncan# pcs cluster setup --name pacemaker1 pinke greene --force
> Destroying cluster on nodes: pinke, greene...
> pinke: Unable to authenticate to pinke - (HTTP error: 401), try running 'pcs cluster auth'
> greene: Unable to authenticate to greene - (HTTP error: 401), try running 'pcs cluster auth'
> pinke: Unable to authenticate to pinke - (HTTP error: 401), try running 'pcs cluster auth'
> greene: Unable to authenticate to greene - (HTTP error: 401), try running 'pcs cluster auth'
> Error: unable to destroy cluster
> greene: Unable to authenticate to greene - (HTTP error: 401), try running 'pcs cluster auth'
> pinke: Unable to authenticate to pinke - (HTTP error: 401), try running 'pcs cluster auth'
> root@greene:/home/duncan#
> 
> this works: rm /etc/corosync/corosync.conf
> 
> Debian Bug report logs - #847295
> pcs cluster setup does not overwrite existing config files, and then the cluster create fails.

Yes, I think removing corosync.conf is documented in README.Debian:

As PCS expects Corosync and Pacemaker to be in unconfigured state,
the following command needs to be executed on all cluster nodes to
stop the services and delete their default configuration:

  # pcs cluster destroy
  Shutting down pacemaker/corosync services...
  Killing any remaining services...
  Removing all cluster configuration files...

-- 
Valentin



Marked Bug as done by Valentin Vidic <vvidic@debian.org>. (Mon, 12 Nov 2018 21:39:12 GMT)
Bug acknowledged by developer; notification sent to Duncan Hare <dh@synoia.com>. (Mon, 12 Nov 2018 21:39:13 GMT)


Message #31 received at 911801-submitter@bugs.debian.org:

close 911801 
thanks

Trying to run the setup command always returns 401:

# pcs cluster setup --name pacemaker1 stretch1 stretch2        
Error: stretch1: unable to authenticate to node
Error: stretch2: unable to authenticate to node
Error: nodes availability check failed, use --force to override. WARNING: This will destroy existing cluster on the nodes.

# pcs cluster setup --name pacemaker1 stretch1 stretch2 --force
Destroying cluster on nodes: stretch1, stretch2...
stretch1: Unable to authenticate to stretch1 - (HTTP error: 401), try running 'pcs cluster auth'
stretch2: Unable to authenticate to stretch2 - (HTTP error: 401), try running 'pcs cluster auth'
stretch2: Unable to authenticate to stretch2 - (HTTP error: 401), try running 'pcs cluster auth'
stretch1: Unable to authenticate to stretch1 - (HTTP error: 401), try running 'pcs cluster auth'
Error: unable to destroy cluster
stretch2: Unable to authenticate to stretch2 - (HTTP error: 401), try running 'pcs cluster auth'
stretch1: Unable to authenticate to stretch1 - (HTTP error: 401), try running 'pcs cluster auth'

Also, even when running with --force, no file gets removed and I see no
reason for severity grave and justification "causes non-serious data loss".

Instructions in README.Debian should still work, so please try to use
those for setting up pcs clusters.

-- 
Valentin




Bug archived. (Tue, 11 Dec 2018 07:25:05 GMT)

Unable to communicate with pacemaker host while authorising

I'm trying to configure a pacemaker cluster with two hosts. I'm using two CentOS 7 (CentOS Linux release 7.2.1511 (Core)) virtual machines.

What I did so far:
I installed packages:

yum install pacemaker corosync haproxy pcs fence-agents-all

Set the password for user hacluster on both servers.
Edited /etc/hosts on both machines:

10.0.0.14 vm_haproxy1
10.0.0.15 vm_haproxy2

After that I enabled the services on both servers:

systemctl enable pcsd.service pacemaker.service corosync.service haproxy.service

And started pcsd (on both servers):

systemctl start pcsd.service

The service is running on both; I can telnet from one to the other on port 2224:

telnet vm_haproxy1 2224
Trying 10.0.0.14...
Connected to vm_haproxy1.
Escape character is '^]'.

Output from netstat:

[root@vm_haproxy2 ~]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      849/sshd            
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      946/master          
tcp6       0      0 :::2224                 :::*                    LISTEN      1949/ruby           
tcp6       0      0 :::22                   :::*                    LISTEN      849/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      946/master          
udp        0      0 127.0.0.1:323           0.0.0.0:*                           619/chronyd         
udp6       0      0 ::1:323                 :::*                                619/chronyd

Pcsd is binding to IPv6, but as I already said, telnet works.
SELinux and firewalld are disabled.
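
A tcp6 listener on :::2224 normally accepts IPv4 connections as well (which is why the telnet above works), unless IPV6_V6ONLY is in effect. Both halves can be checked from the shell (a sketch; 10.0.0.14 is the address from /etc/hosts above):

# 0 means v6 wildcard sockets also accept v4-mapped connections (the usual default)
sysctl net.ipv6.bindv6only

# talk to pcsd over plain IPv4 and print only the HTTP status
curl -sk https://10.0.0.14:2224/ -o /dev/null -w '%{http_code}\n'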

Then I tried to authorise hosts with

pcs cluster auth vm_haproxy1 vm_haproxy2

but I got an error:

 
pcs cluster auth vm_haproxy1 vm_haproxy2
Username: hacluster
Password: 
Error: Unable to communicate with vm_haproxy1
Error: Unable to communicate with vm_haproxy2

I didn't find any help on Google. Maybe someone has already solved this problem with pcs authorisation.

Contents

  1. CentOS
  2. [SOLVED] Cluster: unable to make pcsd works
  3. Re: Cluster: unable to make pcsd works
  4. pcs cluster auth unable to connect
  5. Bug Description
  6. Unable to communicate with pacemaker host while authorising
  7. 10 Answers
  8. ruby-mri only binding to IPV6 interfaces #51
  9. Comments
  10. pcs cluster auth mynode1 --debug

CentOS

The Community ENTerprise Operating System

[SOLVED] Cluster: unable to make pcsd works

Post by rpelissi » 2017/07/26 20:04:26

And then the 2 nodes are activated with no issue. I am able to access https://node1:2224 and connect.

Now, on a production system using the same version of CentOS and the same RPMs from the lab, I can't make it work.

- the command pcs cluster auth -u hacluster -p password results in:

and saw some missing gems, so I installed them.

I have also unset the http_proxy and https_proxy.

If I try to access https://<host>:2224, after the login I get an Internal Server Error message.

It should be something very simple but I can't find what is not working... any help will be greatly appreciated.

Re: Cluster: unable to make pcsd works

Post by hunter86_bg » 2017/07/27 11:48:51

[SOLVED] Cluster: unable to make pcsd works

Post by rpelissi » 2017/07/27 13:20:52

So I have tried to uninstall all gems but the ones that are present in the lab, and everything was working after that! Woohoo!

So it appears that the Ruby pcsd daemon tries to use/load the Ruby libs/modules that are present on the system before using its own libs, or something like that.

Maybe I will take the time to send this information to the CentOS team to see if they can do something about it.


pcs cluster auth unable to connect

Affects: pcs (Ubuntu)

Bug Description

This is the Launchpad report whose text (the Ubuntu 16.04 reproduction steps and the pcsd --debug output for uk01pvmh020 and uk01vort003) appears in full earlier on this page.

Unable to communicate with pacemaker host while authorising

This is the same CentOS 7.2 two-node question quoted in full above: pcs cluster auth vm_haproxy1 vm_haproxy2 fails with "Error: Unable to communicate with ..." even though pcsd is running and reachable by telnet on port 2224.

10 Answers

I typically don’t use pcsd in my clusters, but the times that I have, I recall setting up the authentication after I had a working cluster; function first, frill later.

I would try using the following commands from both nodes to create the corosync configuration and start the cluster before setting up your authentication (the answer's original command listing was lost; see the sketch below):

Once you see your nodes online in the output of crm_mon, try your steps for setting up the node authentication.
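
On CentOS 7 with pcs 0.9 the usual way to bring a cluster up without relying on pcsd token auth is roughly the following (a sketch under that assumption; --local makes pcs write corosync.conf only on the node you run it on):

# on each node: generate corosync.conf locally, then start the stack
pcs cluster setup --local --name mycluster vm_haproxy1 vm_haproxy2
pcs cluster start

# watch membership until both nodes show Online
crm_mon -1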

I had this issue. In my case I tracked it down to the environment variables for proxies being set.

To check this: first run the command with --debug enabled:

In the debug output I could see:

Running curl towards the URL it was trying to use gave more explicit information:

Unsetting http_proxy and https_proxy enabled it to work.
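
A way to rerun that check by hand (a sketch; node name as in the question). With http_proxy/https_proxy set, curl is routed through the proxy's CONNECT tunnel and fails much like the "Tunnel connection failed: 403 Forbidden" seen in the debug output earlier on this page; with the variables scrubbed it reaches pcsd directly:

env | grep -i _proxy                        # which proxy variables are set?
curl -kv https://vm_haproxy1:2224/          # goes through the proxy if https_proxy is set
env -u http_proxy -u https_proxy \
    curl -kv https://vm_haproxy1:2224/      # same request, proxy variables unset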

If you don't have too many nodes, you can list them in the no_proxy environment variable. Otherwise it's a bit painful: you need to ensure you always run the pcs commands with no proxy environment variables set. I have so far not found any way to configure pcs not to use these variables.

You could write a little wrapper that unsets the variables and then calls pcs.
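
A minimal sketch of such a wrapper; the real pcs path and a sensible install location (somewhere earlier in PATH) are assumptions to adapt:

#!/bin/sh
# /usr/local/sbin/pcs - strip proxy variables, then run the real pcs
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
exec /usr/sbin/pcs "$@"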


ruby-mri only binding to IPV6 interfaces #51

If you try to 'auth' to an IPv4 address you can't, but IPv6 addresses are fine. See below.

Possible Fix

After making the following change I can now bind via either IPv4 or IPv6 addresses.


We need to take a closer look at this issue. For me bind address :: works just fine whereas * makes pcsd inaccessible on IPv6.

pcs cluster auth 127.0.0.1 works
pcs cluster auth ::1 works

pcs cluster auth 127.0.0.1 works
pcs cluster auth ::1 doesn’t work

pcs cluster auth 127.0.0.1 works
pcs cluster auth ::1 works

pcs cluster auth 127.0.0.1 works
pcs cluster auth ::1 doesn’t work

I've been doing some research on this, and I'm not 100% sure what's going on, but it looks like different versions of ruby have different behavior.

When I’m using ruby-2.1.5, using a BindAddress of ‘::’ only listens on IPv6, but a BindAddress of nil listens on IPv6 & IPv4. But when I use ruby 1.8.7, I don’t get the same behavior.

Technically, specifying ‘::’ should allow binding of both IPv4 & IPv6, but it appears there is a bug in Webrick that we need to work around.

I’ve committed code that will set the BindAddress to nil, but that will only work with a newer ruby. @dannysheehan what version of ruby are you running?

I was using ruby 2.1.5 also.

I managed to get this working correctly on my rawhide test cluster (ruby 2.2.1) by working around a webrick handler bug in rack, as it seems to have been broken by rack/rack@5a9169d

For what it’s worth I’ve opened an issue report about it rack/rack#833 and there’s another issue linked from it regarding the BindAddress option getting clobbered.

Hello,
on fedora 22 if I set

ruby-mri doesn't bind to the IP address

Thanks for your help
guidtz

Hello guidtz,
an updated pcs package, which contains a fix for this bug, is currently in the queue to be pushed into Fedora 22 repository.

Thanks Tom, in how many days do you think it will be available?

I think it should be available in testing repository in two days and it should take about another week for it to be available in stable. https://admin.fedoraproject.org/updates/pcs-0.9.139-5.fc22

Ok I’ll continue my tests with the source version.

So... I installed the git version and now it listens on 0.0.0.0.

But with this command I get Error: Unable to communicate with pcsd:

pcs cluster auth fed-node01 fed-node02

pcs status
Cluster name: cluster_guidtz
WARNING: no stonith devices and stonith-enabled is not false
Last updated: Tue Jun 9 10:50:19 2015
Last change: Tue Jun 9 10:31:28 2015
Stack: corosync
Current DC: fed-node02 (2) - partition with quorum
Version: 1.1.12-a9c8177
2 Nodes configured
0 Resources configured

Online: [ fed-node01 fed-node02 ]

Full list of resources:

PCSD Status:
fed-node01: Unable to authenticate
fed-node02: Unable to authenticate

Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

Can you try running 'pcs cluster auth'? My guess is that you're running it from source so it's not using the system "auth" directory.

Any updates on this? I am using CentOS 7.1 and after installing pacemaker/pcs and starting the service, it's only serving up on tcp6.

guidtz: Can you try 'pcs cluster auth fed-node01 fed-node02 --debug'? Do you have any proxy set?

johnjelinek: Did you try to connect to pcsd using ipv4 or you just took a look at netstat? For me it works OK on 7.1 as I described in part 1 of my first comment in this thread. What pcs version do you have? Is it a pcs CentOS package or upstream pcs?

I have the same problem - pcs CentOS package version 0.9.137-13.el7_1.3. And it listens only on localhost:
netstat -tpnl | grep 2224
tcp        0      0 127.0.0.1:2224    0.0.0.0:*    NASLOUCHÁ   22908/ruby
tcp6       0      0 ::1:2224          :::*         NASLOUCHÁ   22908/ruby

When I change ssl.rb to bind to nil and add to ssl.rb:
host => nil,

everything is OK.

@spravcesite 👍
I’m running Debian Jessie with ruby 2.1.5p273

I’m running into the same issue but the above fixes do not work. I can get the web UI to render but I’m unable to authenticate cluster members.

Hi, I have a similar issue:

pcs cluster auth mynode1 --debug

Running: /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb read_tokens
--Debug Input Start--
{}
--Debug Input End--

Return Value: 1
--Debug Output Start--
/usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- json (LoadError)
	from /usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /usr/lib/pcsd/pcsd-cli.rb:5:in `<main>'

--Debug Output End--

Return Value: 1
--Debug Output Start--
/usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- json (LoadError)
	from /usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /usr/lib/pcsd/pcsd-cli.rb:5:in `<main>'

--Debug Output End--

Error: Unable to communicate with pcsd

but I have another cluster with rhel7 where the process uses only ipv6 and all works well. I installed pcs from the repository, but it seems there is also some problem with gems. Why?
Can someone give me some advice?
Thank you

@howdoicomputer Can you try running pcs cluster auth nfs1 nfs2 --debug and post the output? Be aware your password is contained in the output so you most probably want to replace it.

@ntt1985 So pcsd is running just fine, yet pcsd-cli.rb is unable to load rubygem-json. I'm not able to reproduce this on a freshly installed RHEL7 host. Do you have any custom ruby setup or settings, e.g. multiple ruby versions or something like that? Rubygem-json should be installed in /usr/share/gems. Can you try running GEM_PATH=/usr/share/gems pcs cluster auth mynode1 --debug?

@tomjelinek you are right. The problem was related to multiple versions of ruby installed with rvm on the server. After deleting all ruby versions, uninstalling rvm, and reinstalling pcs, all works well. Thank you.
PS: actually I have port 2224 used by ipv6 (or at least netstat -tulnp shows me that), but all works well also with ipv4 on a centos7 cluster.

I have the same problem (binding to IPv6 only) on Debian with ruby version 2.1 and 2.3. The problem seems to be in the change to WEBrick::Utils::create_listeners introduced in ruby 2.1:

A small example program calling this function can prove this:
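
The original test program was not captured here; the following is a minimal sketch that exercises the same call (port 2224 is just pcsd's port reused for illustration):

require 'webrick'

# Create listening sockets the way pcsd's webrick does, then print the
# address family and local address each returned socket is bound to.
[nil, '::'].each do |addr|
  socks = WEBrick::Utils.create_listeners(addr, 2224)
  socks.each { |s| puts "bind=#{addr.inspect}: #{s.addr.values_at(0, 3).join(' ')}" }
  socks.each(&:close)
end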

This is the strace output for CentOS7 with ruby 2.0:

On ruby 2.1 and newer the same program executes this:

The problem is caused by the IPV6_V6ONLY socket option being set:
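
The relevant strace line did not survive here, but the option can also be inspected from ruby itself; a quick sketch, assuming a platform that defines IPV6_V6ONLY:

require 'socket'

# Bind to '::' the way ruby >= 2.1 does and read back IPV6_V6ONLY;
# a value of 1 means the socket will not accept IPv4-mapped connections.
socks = Socket.tcp_server_sockets('::', 2224)
socks.each do |s|
  v6only = s.getsockopt(Socket::IPPROTO_IPV6, Socket::IPV6_V6ONLY).int
  puts "#{s.local_address.inspect_sockaddr} IPV6_V6ONLY=#{v6only}"
end
socks.each(&:close)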

On ruby 2.1, using nil in place of :: has the same effect, so perhaps a ruby version check could be added to the bind code in pcsd?

We’ll take a look at this. Thanks @vvidic!

It seems the rack handler for webrick does a little rewriting of this parameter:

Because of this, nil gets replaced with 0.0.0.0 and we get IPv4 only. For some reason this only happens when RACK_ENV=production is set for the ruby process.

But luckily getaddrinfo in libc will take * instead of nil/NULL:

and the following fix works with rack+webrick combination too:
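
The fix itself was elided above; in substance it passes '*' as the bind address. A minimal sketch of the idea, assuming rack 1.x/2.x with its bundled WEBrick handler (illustrative only, not the literal pcsd patch):

require 'rack'
require 'rack/handler/webrick'

app = proc { |env| [200, { 'Content-Type' => 'text/plain' }, ["ok\n"]] }

# '*' survives rack's host rewriting, and glibc's getaddrinfo expands it
# to both 0.0.0.0 and ::, so the server listens on IPv4 and IPv6.
Rack::Handler::WEBrick.run(app, :Port => 2224, :Host => '*')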

@vvidic Thank you very much for the time and effort you spent on debugging this issue!

Here is the resulting patch: 13cfbab. The user is still able to define bind addresses without pcsd changing them based on the ruby version.

I’m closing this issue. I believe the root cause has been fixed by the patch above. Furthermore there is a possibility to set bind addresses manually (see #77) to workaround this issue.

Having hit this issue in my own projects, and since this issue helped me out in fixing that, I’ll add my own caveat to what @vvidic said up-thread. Specifically:

But luckily getaddrinfo in libc will take *

is only true for glibc. As far as I can tell, there's no requirement that getaddrinfo(3) (the underlying function which does the mapping of * to 0.0.0.0 and ::) handle * in the manner that glibc does. I know for a fact that musl libc doesn't handle * the same way.
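
A quick way to see what your own libc does with '*' is ruby's thin wrapper over getaddrinfo; per the discussion above, glibc should print the wildcard addresses, while musl should raise a resolution error instead:

require 'socket'

begin
  Addrinfo.getaddrinfo('*', 2224, nil, :STREAM, nil, Socket::AI_PASSIVE).each do |ai|
    puts ai.inspect   # e.g. 0.0.0.0:2224 and [::]:2224 on glibc
  end
rescue SocketError => e
  puts "libc refused '*': #{e.message}"   # expected on musl
end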


While recently setting up Pacemaker + Corosync, I tried to change the bindnetaddr address in corosync.conf (I wanted to move it to 10.0.0.0), but after applying the change and checking:

corosync-cfgtool -s

the new configuration had not taken effect: the heartbeat still used the original network (192.168.122.0) for heartbeat detection.
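
As an aside, the values corosync actually loaded can be dumped from the running cluster; a quick check, assuming corosync 2.x as in the logs below:

corosync-cmapctl | egrep 'totem.interface|nodelist.node'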

Looking through the related logs, there is no obvious error:

[root@node2 ~]# tailf -n 100 /var/log/cluster/corosync.log
...
[2289] node2 corosyncnotice  [TOTEM ] A new membership (192.168.122.60:316) was formed. Members joined: 1
[2289] node2 corosyncnotice  [QUORUM] Members[3]: 1 3 2
[2289] node2 corosyncnotice  [MAIN  ] Completed service synchronization, ready to provide service.
[2289] node2 corosyncnotice  [MAIN  ] Node was shut down by a signal
[2289] node2 corosyncnotice  [SERV  ] Unloading all Corosync service engines.
[2289] node2 corosyncinfo    [QB    ] withdrawing server sockets
[2289] node2 corosyncnotice  [SERV  ] Service engine unloaded: corosync vote quorum service v1.0
[2289] node2 corosyncinfo    [QB    ] withdrawing server sockets
[2289] node2 corosyncnotice  [SERV  ] Service engine unloaded: corosync configuration map access
[2289] node2 corosyncinfo    [QB    ] withdrawing server sockets
[2289] node2 corosyncnotice  [SERV  ] Service engine unloaded: corosync configuration service
[2289] node2 corosyncinfo    [QB    ] withdrawing server sockets
[2289] node2 corosyncnotice  [SERV  ] Service engine unloaded: corosync cluster closed process group service v1.01
[2289] node2 corosyncinfo    [QB    ] withdrawing server sockets
[2289] node2 corosyncnotice  [SERV  ] Service engine unloaded: corosync cluster quorum service v0.1
[2289] node2 corosyncnotice  [SERV  ] Service engine unloaded: corosync profile loading service
[2289] node2 corosyncnotice  [MAIN  ] Corosync Cluster Engine exiting normally
[18475] node2 corosyncnotice  [MAIN  ] Corosync Cluster Engine ('2.4.3'): started and ready to provide service.
[18475] node2 corosyncinfo    [MAIN  ] Corosync built-in features: dbus systemd xmlconf qdevices qnetd snmp libcgroup pie relro bindnow
[18475] node2 corosyncwarning [MAIN  ] interface section bindnetaddr is used together with nodelist. Nodelist one is going to be used.
[18475] node2 corosyncwarning [MAIN  ] Please migrate config file to nodelist.
[18475] node2 corosyncnotice  [TOTEM ] Initializing transport (UDP/IP Unicast).
[18475] node2 corosyncnotice  [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
[18475] node2 corosyncnotice  [TOTEM ] The network interface [192.168.122.117] is now up.
[18475] node2 corosyncnotice  [SERV  ] Service engine loaded: corosync configuration map access [0]
[18475] node2 corosyncinfo    [QB    ] server name: cmap
[18475] node2 corosyncnotice  [SERV  ] Service engine loaded: corosync configuration service [1]
[18475] node2 corosyncinfo    [QB    ] server name: cfg
[18475] node2 corosyncnotice  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
[18475] node2 corosyncinfo    [QB    ] server name: cpg
[18475] node2 corosyncnotice  [SERV  ] Service engine loaded: corosync profile loading service [4]
[18475] node2 corosyncnotice  [QUORUM] Using quorum provider corosync_votequorum
[18475] node2 corosyncnotice  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
[18475] node2 corosyncinfo    [QB    ] server name: votequorum
[18475] node2 corosyncnotice  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
[18475] node2 corosyncinfo    [QB    ] server name: quorum
[18475] node2 corosyncnotice  [TOTEM ] adding new UDPU member {192.168.122.60}
[18475] node2 corosyncnotice  [TOTEM ] adding new UDPU member {192.168.122.117}
[18475] node2 corosyncnotice  [TOTEM ] adding new UDPU member {192.168.122.114}
[18475] node2 corosyncnotice  [TOTEM ] A new membership (192.168.122.117:320) was formed. Members joined: 2
[18475] node2 corosyncnotice  [TOTEM ] A new membership (192.168.122.60:324) was formed. Members joined: 1
[18475] node2 corosyncnotice  [QUORUM] This node is within the primary component and will provide service.
[18475] node2 corosyncnotice  [QUORUM] Members[2]: 1 2
[18475] node2 corosyncnotice  [MAIN  ] Completed service synchronization, ready to provide service.
[18475] node2 corosyncnotice  [TOTEM ] A new membership (192.168.122.60:328) was formed. Members joined: 3
[18475] node2 corosyncwarning [CPG   ] downlist left_list: 0 received in state 0
[18475] node2 corosyncnotice  [QUORUM] Members[3]: 1 3 2
[18475] node2 corosyncnotice  [MAIN  ] Completed service synchronization, ready to provide service.
...

Comparing the configuration file against the shipped example, there are no spelling mistakes:

[root@node2 ~]# egrep -v '(#|^$)' /etc/corosync/corosync.conf.example
totem {
	version: 2
	crypto_cipher: none
	crypto_hash: none
	interface {
		ringnumber: 0
		bindnetaddr: 192.168.1.0
		mcastaddr: 239.255.1.1
		mcastport: 5405
		ttl: 1
	}
}
logging {
	fileline: off
	to_stderr: no
	to_logfile: yes
	logfile: /var/log/cluster/corosync.log
	to_syslog: yes
	debug: off
	timestamp: on
	logger_subsys {
		subsys: QUORUM
		debug: off
	}
}
quorum {
}

After checking the network and the hosts file (all the related node IPs are present), no problems were found. While struggling with this, I tried changing the addresses in the nodelist:

[root@node1 corosync]# vim corosync.conf

nodelist {
    node {
        ring0_addr: 10.0.0.10
        nodeid: 1
    }

    node {
        ring0_addr: 10.0.0.20
        nodeid: 2
    }

    node {
        ring0_addr: 10.0.0.30
        nodeid: 3
    }
}

quorum {
    provider: corosync_votequorum
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

Running pcs cluster sync then fails with an authentication error:

[root@node1 corosync]# pcs cluster sync
10.0.0.10: {"notauthorized":"true"}
Unable to authenticate to 10.0.0.10 - (HTTP error: 401), try running 'pcs cluster auth'
Error: Unable to set corosync config: Unable to authenticate to 10.0.0.10 - (HTTP error: 401), try running 'pcs cluster auth'

The cause was found: pcs auth had never been run against the new IP addresses. Re-running the auth command resolves the problem:

[root@node1 corosync]# pcs cluster auth 10.0.0.10 10.0.0.20 10.0.0.30
Username: hacluster
Password: 
10.0.0.30: Authorized
10.0.0.20: Authorized
10.0.0.10: Authorized
[root@node1 corosync]# pcs cluster sync
10.0.0.10: Succeeded
10.0.0.20: Succeeded
10.0.0.30: Succeeded
[root@node1 corosync]# pcs cluster start --all
10.0.0.10: Starting Cluster (corosync)...
10.0.0.20: Starting Cluster (corosync)...
10.0.0.30: Starting Cluster (corosync)...
10.0.0.30: Starting Cluster (pacemaker)...
10.0.0.10: Starting Cluster (pacemaker)...
10.0.0.20: Starting Cluster (pacemaker)...
[root@node1 corosync]# corosync-cfgtool -s 
Printing ring status.
Local node ID 1
RING ID 0
	id	= 10.0.0.10
	status	= ring 0 active with no faults

  • #1

Hello,

I created a cluster with proxmox, and when I run the command pvecm add 192.168.0.25

on the node which will join the cluster,

I get this output on the command line:

Code:

Please enter superuser (root) password for '192.168.0.25': ****************************************
Establishing API connection with host '192.168.0.25'
The authenticity of host '192.168.0.25' can't be established.
X509 SHA256 key fingerprint is AA:A7:12:A3:16:FC:0A:B8:50:D5:84:8D:A7:66:E5:16:52:2B:AC:B0:AC:8F:98:E4:C3:4E:56:87:0D:18:D7:69.
Are you sure you want to continue connecting (yes/no)? yes
401 401 authentication failure

I have verified the time synchronisation, the same version…
And I get the same error with the GUI.

Can you help me?
Thanks in advance for your help

Moayad


  • #2

Hi,

Please delete known_hosts (rm ~/.ssh/known_hosts), then try again.

  • #3

I have the same problem. Freshly installed Proxmox 6.2-6 machines.
Subscription + fully updated.

Joining second machine to first machine’s cluster via GUI gives error message:

Establishing API connection with host ‘192.168.194.11’
TASK ERROR: 401 401 authentication failure

I double checked the join information string. Removing known_hosts did not help as it does not exist on either pve1 or pve2 @Moayad

root@pve2:~# rm ~/.ssh/known_hosts
rm: cannot remove '/root/.ssh/known_hosts': No such file or directory
root@pve2:~# ls -la /root/.ssh/
total 20
drwxr-xr-x 2 root root 4096 Jun 22 14:09 .
drwx------ 4 root root 4096 Jun 22 15:53 ..
lrwxrwxrwx 1 root root 29 Jun 22 14:09 authorized_keys -> /etc/pve/priv/authorized_keys
-rw-r----- 1 root root 117 Jun 22 14:09 config
-rw------- 1 root root 1811 Jun 22 14:09 id_rsa
-rw-r--r-- 1 root root 391 Jun 22 14:09 id_rsa.pub

The relevant section in the logfile is just:

Jun 22 16:03:01 pve1 systemd[1]: Started Proxmox VE replication runner.
Jun 22 16:03:01 pve1 systemd[1]: Starting Cleanup of Temporary Directories…
Jun 22 16:03:01 pve1 systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Jun 22 16:03:01 pve1 systemd[1]: Started Cleanup of Temporary Directories.
Jun 22 16:03:06 pve1 pvedaemon[1297]: <root@pam> successful auth for user ‘root@pam’
Jun 22 16:03:29 pve1 IPCC.xs[1298]: pam_unix(common-auth:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost= user=root
Jun 22 16:03:31 pve1 pvedaemon[1298]: authentication failure; rhost=192.168.194.12 user=root@pam msg=Authentication failure
Jun 22 16:04:00 pve1 systemd[1]: Starting Proxmox VE replication runner…
Jun 22 16:04:01 pve1 systemd[1]: pvesr.service: Succeeded.
Jun 22 16:04:01 pve1 systemd[1]: Started Proxmox VE replication runner.
Jun 22 16:04:03 pve1 pvedaemon[1297]: <root@pam> successful auth for user ‘root@pam’

and

Jun 22 16:02:01 pve2 systemd[1]: Started Proxmox VE replication runner.
Jun 22 16:02:55 pve2 pvedaemon[1285]: <root@pam> successful auth for user ‘root@pam’
Jun 22 16:03:00 pve2 systemd[1]: Starting Proxmox VE replication runner…
Jun 22 16:03:01 pve2 systemd[1]: pvesr.service: Succeeded.
Jun 22 16:03:01 pve2 systemd[1]: Started Proxmox VE replication runner.
Jun 22 16:03:29 pve2 pvedaemon[1287]: <root@pam> starting task UPID: ve2:00000D2D:<removed>:<removed>:clusterjoin::root@pam:
Jun 22 16:03:29 pve2 systemd[1]: Starting Cleanup of Temporary Directories…
Jun 22 16:03:29 pve2 systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Jun 22 16:03:29 pve2 systemd[1]: Started Cleanup of Temporary Directories.
Jun 22 16:03:32 pve2 pvedaemon[3373]: 401 401 authentication failure
Jun 22 16:03:32 pve2 pvedaemon[1287]: <root@pam> end task UPID: ve2:00000D2D:<removed>:<removed>:clusterjoin::root@pam: 401 401 authentication failure
Jun 22 16:04:00 pve2 systemd[1]: Starting Proxmox VE replication runner…
Jun 22 16:04:01 pve2 systemd[1]: pvesr.service: Succeeded.


  • #4

I have the same problem. Freshly installed Proxmox 6.2-6 machines.
Subscription + fully updated.

Joining second machine to first machine’s cluster via GUI gives error message:

Establishing API connection with host ‘192.168.194.11’
TASK ERROR: 401 401 authentication failure

[…]

Problem solved: Lastpass filled in the local machine's root password, and I assumed it was decoded from the join string; that's why I didn't notice that the remote machine's root password was wrong. Source: https://forum.proxmox.com/threads/cant-join-cluster-through-gui.68201/#post-321486


  • #5

Hi,

Please delete known_hosts (rm ~/.ssh/known_hosts), then try again.

I tried this option but it does not work; it just causes an abrupt disconnection and then gives the error 401: No ticket.

