I think the error message is correct (`Error: client: etcd cluster is unavailable or misconfigured`), but here is an explanation to save first-time users some time:
This can happen when the etcd node addresses ('endpoints') are not published or are incorrect. By default, `etcdctl` overwrites its list of endpoints (specified, e.g., via the `--endpoint` flag) with the list of published endpoints.
Assuming the IP address of one of the etcd nodes is 10.0.0.101, there are at least three options:
- refrain from synchronizing with published addresses using the `--no-sync` option, e.g.:
  etcdctl --no-sync --endpoint http://10.0.0.101:2379 set /hello world
- use `curl` instead of `etcdctl`:
  - set: curl -L -X PUT http://10.0.0.101:2379/v2/keys/hello -d value="world"
  - get: curl -L http://10.0.0.101:2379/v2/keys/hello
- publish the endpoints (make sure the listen-peer-urls and listen-client-urls are correct):
  # kill etcd (the original ps|grep|sed one-liner had its sed escapes stripped; pkill is equivalent and simpler)
  sudo pkill -9 etcd
  # start etcd (replace <token> with a generated token from, e.g., https://discovery.etcd.io/new?size=1)
  etcd2 --name infra1 --initial-advertise-peer-urls http://10.0.0.101:2380 \
    --listen-peer-urls http://10.0.0.101:2380 \
    --listen-client-urls http://10.0.0.101:2379,http://127.0.0.1:2379 \
    --advertise-client-urls http://10.0.0.101:2379 \
    --discovery https://discovery.etcd.io/<token>
  # try it now
  etcdctl set /hello world
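For completeness, the curl get call above returns a small JSON document. Here is a minimal sketch of pulling the value field out with standard tools; the response shape follows the etcd v2 keys API, and the index values are illustrative:

```shell
#!/bin/sh
# Sample shape of the response from `curl -L http://10.0.0.101:2379/v2/keys/hello`
# under the etcd v2 keys API (index values are made up for this demo).
resp='{"action":"get","node":{"key":"/hello","value":"world","modifiedIndex":4,"createdIndex":4}}'
# Extract the "value" field with sed; good enough for simple, flat v2 responses.
value=$(printf '%s' "$resp" | sed 's/.*"value":"\([^"]*\)".*/\1/')
echo "$value"
```

For anything beyond flat responses, a real JSON parser (e.g. `jq`) is the safer choice.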
Contents
- installation: Error: client: etcd cluster is unavailable or misconfigured #19235
- Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:4001: connect: connection refused #98149
- client: etcd cluster is unavailable or misconfigured #40997
- [ubuntu install error]client: etcd cluster is unavailable or misconfigured #33820
- etcd cluster is unavailable or misconfigured #5789
installation: Error: client: etcd cluster is unavailable or misconfigured #19235
/kubernetes/cluster
Done! All your binaries locate in kubernetes/cluster/ubuntu/binaries directory
Deploying master and node on machine 8.0.0.6
make-ca-cert.sh 100% 3270 3.2KB/s 00:00
config-default.sh 100% 3431 3.4KB/s 00:00
util.sh 100% 22KB 22.2KB/s 00:00
flanneld.conf 100% 577 0.6KB/s 00:00
kubelet.conf 100% 644 0.6KB/s 00:00
kube-proxy.conf 100% 684 0.7KB/s 00:00
kube-proxy 100% 2230 2.2KB/s 00:00
kubelet 100% 2155 2.1KB/s 00:00
flanneld 100% 2131 2.1KB/s 00:00
kube-apiserver.conf 100% 674 0.7KB/s 00:00
kube-scheduler.conf 100% 674 0.7KB/s 00:00
flanneld.conf 100% 568 0.6KB/s 00:00
kube-controller-manager.conf 100% 744 0.7KB/s 00:00
etcd.conf 100% 709 0.7KB/s 00:00
kube-scheduler 100% 2360 2.3KB/s 00:00
kube-controller-manager 100% 2672 2.6KB/s 00:00
kube-apiserver 100% 2358 2.3KB/s 00:00
etcd 100% 2073 2.0KB/s 00:00
flanneld 100% 2131 2.1KB/s 00:00
reconfDocker.sh 100% 2048 2.0KB/s 00:00
kube-scheduler 100% 18MB 17.5MB/s 00:00
kube-controller-manager 100% 36MB 35.8MB/s 00:00
etcdctl 100% 12MB 12.1MB/s 00:01
kube-apiserver 100% 43MB 43.4MB/s 00:00
etcd 100% 14MB 13.7MB/s 00:00
flanneld 100% 16MB 15.8MB/s 00:00
kube-proxy 100% 18MB 18.3MB/s 00:00
kubelet 100% 39MB 38.8MB/s 00:00
flanneld 100% 16MB 15.8MB/s 00:01
sudo: unable to resolve host ds2
etcd start/running, process 10293
Error: client: etcd cluster is unavailable or misconfigured
error #0: dial tcp 127.0.0.1:2379: getsockopt: connection refused
error #1: client: endpoint http://127.0.0.1:4001 exceeded header timeout
etcd cluster has no published client endpoints.
Try '--no-sync' if you want to access non-published client endpoints(http://127.0.0.1:4001,http://127.0.0.1:2379).
Error: client: no endpoints available
Error: 100: Key not found (/coreos.com) [10]
Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:4001: connect: connection refused #98149
What happened:
Created a K8S cluster with 1 control plane node using kubeadm init --config /home/kube/kubeadmn-config.yaml --upload-certs
kubectl get pods --all-namespaces
Attempted to add a second control plane node using kubeadm join 10.1.50.250:6443 --token v585r6.6ayp6fvu65hmkx7z --discovery-token-ca-cert-hash sha256:bfeeda1d85202e28d4f944b0d40c53c3d3fadb347bca986fb7efa911510f82c7 --control-plane --certificate-key 033d5722def17eca68fa3d83a4aff82112955a06819b32f513be0440ac1d8b4c --v=5.
The second node failed to join, and the attempt also broke both nodes.
What you expected to happen:
The second control plane node gets added to the cluster.
How to reproduce it (as minimally and precisely as possible):
Copy above config file and run the create cluster command. Both control plane nodes run Ubuntu 18.04.
Anything else we need to know?:
etcdctl --version
cluster may be unhealthy: failed to list members
Error: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:4001: connect: connection refused
; error #1: EOF
error #0: dial tcp 127.0.0.1:4001: connect: connection refused
error #1: EOF
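The two addresses in these errors are worth decoding: they are etcdctl's built-in v2 defaults, tried when no endpoints are configured (4001 is the legacy etcd client port, 2379 the IANA-assigned one; treat the exact default list and order as an assumption inferred from the logs above). A sketch of that fallback, illustrative rather than etcdctl's actual code:

```shell
#!/bin/sh
# Ensure the override variable is empty so the fallback fires (demo only).
unset ETCDCTL_ENDPOINTS
# When neither --endpoints nor ETCDCTL_ENDPOINTS is set, the v2 etcdctl client
# falls back to loopback defaults, which is why the errors name 127.0.0.1.
endpoints="${ETCDCTL_ENDPOINTS:-http://127.0.0.1:4001,http://127.0.0.1:2379}"
echo "$endpoints"
```

With a kubeadm-built cluster, etcd serves TLS on 2379, so these plaintext loopback defaults will never work; the endpoints and certificates have to be passed explicitly.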
Environment:
- Kubernetes version (use kubectl version ): 1.20.1
- Cloud provider or hardware configuration: Bare metal
- OS (e.g: cat /etc/os-release ): Ubuntu 18.04
- Kernel (e.g. uname -a ): 4.15.0-130-generic
client: etcd cluster is unavailable or misconfigured #40997
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
I have been getting this error when trying to reach the Kubernetes dashboard. It started a few days after creating the cluster, and the frequency of crashes and periods of dashboard unavailability is slowly increasing.
Here is the log snippet I got from the etcd container k8s_etcd-container.fc80f341_etcd-server-events-ip-172-20-61-117.eu-central-1.compute.internal_kube-system_59da3ff0d3d20fb78afe7b5f9a9333bc_38757300, taken from /var/log/etcd.log
Kubernetes version (use kubectl version ):
Environment:
- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie)
- Kernel (e.g. uname -a ): Linux ip-172-20-61-117 4.4.26-k8s #1 SMP Fri Oct 21 05:21:13 UTC 2016 x86_64 GNU/Linux
- Install tools: kops
- Others:
What happened:
Kubernetes api is not reachable.
What you expected to happen:
High availability of kubernetes api and dashboard
How to reproduce it (as minimally and precisely as possible):
Create a cluster using kops (https://github.com/kubernetes/kops/blob/master/docs/aws.md)
Anything else we need to know:
Have 1 master(t2.small) and 2 nodes(t2.medium)
Please help in resolving the issue and also let me know if you need more info.
[ubuntu install error]client: etcd cluster is unavailable or misconfigured #33820
Hey guys,
I'm having a problem when trying to start the cluster.
The Kubernetes version is 1.4.0.
How can I solve it? Thanks.
CMD: KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
. Starting cluster using provider: ubuntu
. calling verify-prereqs
Identity added: /home/zhigang/.ssh/id_rsa (/home/zhigang/.ssh/id_rsa)
. calling kube-up
/usr/local/develop/kubernetes/cluster/ubuntu /usr/local/develop/kubernetes/cluster
Prepare flannel 0.5.5 release .
Prepare etcd 2.3.1 release .
Prepare kubernetes 1.4.0 release .
Done! All your binaries locate in kubernetes/cluster/ubuntu/binaries directory
/usr/local/develop/kubernetes/cluster
Deploying master and node on machine 192.168.11.128
make-ca-cert.sh 100% 4136 4.0KB/s 00:00
easy-rsa.tar.gz 100% 42KB 42.4KB/s 00:00
config-default.sh 100% 5551 5.4KB/s 00:00
util.sh 100% 29KB 28.9KB/s 00:00
kube-proxy.conf 100% 688 0.7KB/s 00:00
kubelet.conf 100% 645 0.6KB/s 00:00
kubelet 100% 2158 2.1KB/s 00:00
kube-proxy 100% 2233 2.2KB/s 00:00
etcd.conf 100% 707 0.7KB/s 00:00
kube-apiserver.conf 100% 682 0.7KB/s 00:00
kube-scheduler.conf 100% 682 0.7KB/s 00:00
kube-controller-manager.conf 100% 761 0.7KB/s 00:00
kube-apiserver 100% 2358 2.3KB/s 00:00
kube-scheduler 100% 2360 2.3KB/s 00:00
kube-controller-manager 100% 2672 2.6KB/s 00:00
etcd 100% 2073 2.0KB/s 00:00
reconfDocker.sh 100% 2082 2.0KB/s 00:00
kube-apiserver 100% 144MB 48.1MB/s 00:03
kube-scheduler 100% 77MB 77.0MB/s 00:01
kube-controller-manager 100% 135MB 33.6MB/s 00:04
flanneld 100% 16MB 15.8MB/s 00:01
etcd 100% 16MB 15.9MB/s 00:00
etcdctl 100% 14MB 13.7MB/s 00:00
kubelet 100% 123MB 61.4MB/s 00:02
kube-proxy 100% 69MB 34.7MB/s 00:02
flanneld 100% 16MB 15.8MB/s 00:00
flanneld.conf 100% 579 0.6KB/s 00:00
flanneld 100% 2121 2.1KB/s 00:00
flanneld.conf 100% 570 0.6KB/s 00:00
flanneld 100% 2131 2.1KB/s 00:00
Warning: etcd.service changed on disk. Run ‘systemctl daemon-reload’ to reload units.
Error: client: etcd cluster is unavailable or misconfigured
error #0: dial tcp 127.0.0.1:2379: getsockopt: connection refused
error #1: dial tcp 127.0.0.1:4001: getsockopt: connection refused
Error: client: etcd cluster is unavailable or misconfigured
error #0: dial tcp 127.0.0.1:2379: getsockopt: connection refused
error #1: dial tcp 127.0.0.1:4001: getsockopt: connection refused
Error: client: etcd cluster is unavailable or misconfigured
error #0: dial tcp 127.0.0.1:4001: getsockopt: connection refused
error #1: dial tcp 127.0.0.1:2379: getsockopt: connection refused
Error: client: etcd cluster is unavailable or misconfigured
error #0: dial tcp 127.0.0.1:2379: getsockopt: connection refused
error #1: dial tcp 127.0.0.1:4001: getsockopt: connection refused
CMD: systemctl status etcd.service
● etcd.service — LSB: Start distrubted key/value pair service
Loaded: loaded (/etc/init.d/etcd; bad; vendor preset: enabled)
Active: active (exited) since Thu 2016-09-29 22:06:37 PDT; 1h 21min ago
Docs: man:systemd-sysv-generator(8)
Process: 1192 ExecStart=/etc/init.d/etcd start (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 512)
Memory: 0B
CPU: 0
Sep 29 22:06:37 ubuntu-01 systemd[1]: Starting LSB: Start distrubted key/value pair service.
Sep 29 22:06:37 ubuntu-01 etcd[1192]: * Starting Etcd: etcd
Sep 29 22:06:37 ubuntu-01 etcd[1192]: . done.
Sep 29 22:06:37 ubuntu-01 systemd[1]: Started LSB: Start distrubted key/value pair service.
Sep 29 22:06:37 ubuntu-01 etcd[1203]: error verifying flags, '>>' is not a valid flag. See 'etcd --help'.
Sep 29 22:48:23 ubuntu-01 systemd[1]: Started LSB: Start distrubted key/value pair service.
Sep 29 22:56:33 ubuntu-01 systemd[1]: Started LSB: Start distrubted key/value pair service.
Sep 29 23:21:43 ubuntu-01 systemd[1]: Started LSB: Start distrubted key/value pair service.
Warning: etcd.service changed on disk. Run ‘systemctl daemon-reload’ to reload units.
etcd cluster is unavailable or misconfigured #5789
Hi all,
I am trying to install kubespray on 2 virtual machines. The ansible playbook is executed from a third VM on the same subnet. All VMs have the firewall disabled, ssh access enabled between them, and their clocks synchronized with chrony.
After the ansible run finished, I got this error:
Environment:
- Cloud provider or hardware configuration: Nodes are Openstack VMs
- OS ( printf "$(uname -srm)\n$(cat /etc/os-release)\n" ): Ubuntu 16.04
- Version of Ansible ( ansible --version ): 2.7.16
- Version of Python ( python --version ): 2.7.12
Kubespray version (commit) ( git rev-parse --short HEAD ): Latest git
Network plugin used: Default
Full inventory with variables ( ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]" ):
Command used to invoke ansible:
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
Output of ansible run:
https://gist.github.com/efotopoulou/7880356cca64bccc2f5d15bd2c1345fb
Anything else do we need to know:
After executing the collect-info script:
ansible-playbook -i inventory/mycluster/hosts.yaml -u ubuntu -e ansible_ssh_user=ubuntu -b --become-user=root -e dir=`pwd` scripts/collect-info.yaml
I got this output:
https://gist.github.com/efotopoulou/f4f2cee9e241594c16eb54ab832852d8
logs.tar.gz
Any help would be very much appreciated 🙂
I have two servers: pg1 (10.80.80.195) and pg2 (10.80.80.196).
Version of etcd:
etcd Version: 3.2.0
Git SHA: 66722b1
Go Version: go1.8.3
Go OS/Arch: linux/amd64
I'm trying to run it like this:
pg1 server:
etcd --name infra0 --initial-advertise-peer-urls http://10.80.80.195:2380 --listen-peer-urls http://10.80.80.195:2380 --listen-client-urls http://10.80.80.195:2379,http://127.0.0.1:2379 --advertise-client-urls http://10.80.80.195:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster infra0=http://10.80.80.195:2380,infra1=http://10.80.80.196:2380 --initial-cluster-state new
pg2 server:
etcd --name infra1 --initial-advertise-peer-urls http://10.80.80.196:2380 --listen-peer-urls http://10.80.80.196:2380 --listen-client-urls http://10.80.80.196:2379,http://127.0.0.1:2379 --advertise-client-urls http://10.80.80.196:2379 --initial-cluster-token etcd-cluster-1 --initial-cluster infra0=http://10.80.80.195:2380,infra1=http://10.80.80.196:2380 --initial-cluster-state new
When trying to check the health state on pg1:
etcdctl cluster-health
I get this error:
cluster may be unhealthy: failed to list members
Error: client: etcd cluster is unavailable or misconfigured; error #0: client: endpoint http://127.0.0.1:2379 exceeded header timeout
; error #1: dial tcp 127.0.0.1:4001: getsockopt: connection refused
error #0: client: endpoint http://127.0.0.1:2379 exceeded header timeout
error #1: dial tcp 127.0.0.1:4001: getsockopt: connection refused
What am I doing wrong, and how do I fix it?
Both servers run on virtual machines with a bridged network adapter.
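A separate caveat for this setup: a two-member etcd cluster has no fault tolerance, because quorum is a majority, floor(n/2)+1, so losing either node makes the whole cluster unavailable. A quick check of the arithmetic (the formula is the standard Raft majority rule that etcd uses):

```shell
#!/bin/sh
# Quorum (majority) size for an n-member etcd cluster: floor(n/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }
# With 2 members quorum is 2: neither node may fail.
quorum 2
# With 3 members quorum is also 2: one node may fail.
quorum 3
```

This is why etcd deployments are normally sized at 3 or 5 members rather than 2.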
Issue
The `asb` pod in the `openshift-ansible-service-broker` namespace has gone into `CrashLoopBackOff` or `Error` with the log messages below.
Using config file mounted to /etc/ansible-service-broker/config.yaml
============================================================
== Starting Ansible Service Broker... ==
============================================================
[2018-03-19T07:05:12.331Z] [NOTICE] Initializing clients...
[2018-03-19T07:05:12.333Z] [INFO] == ETCD CX ==
[2018-03-19T07:05:12.333Z] [INFO] EtcdHost: asb-etcd.openshift-ansible-service-broker.svc
[2018-03-19T07:05:12.333Z] [INFO] EtcdPort: 2379
[2018-03-19T07:05:12.333Z] [INFO] Endpoints: [https://asb-etcd.openshift-ansible-service-broker.svc:2379]
[2018-03-19T07:05:12.343Z] [ERROR] client: etcd cluster is unavailable or misconfigured; error #0: x509: certificate signed by unknown authority
Environment
Red Hat OpenShift Container Platform 3.7
1. What kops version are you running? The command kops version will display this information.
Version 1.12.2
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T14:25:20Z", GoVersion:"go1.12.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.8", GitCommit:"a89f8c11a5f4f132503edbc4918c98518fd504e3", GitTreeState:"clean", BuildDate:"2019-04-23T04:41:47Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
3. What cloud provider are you using?
aws
4. What commands did you run? What is the simplest way to reproduce this issue?
kops update cluster
kops rolling-update
5. What happened after the commands executed?
kube-controller-manager reports the following errors:
Unable to sync caches for garbage collector controller
Failed to list <nil>: client: etcd cluster is unavailable or misconfigured; error #0: net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
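The "malformed HTTP response" bytes are themselves a useful clue: they are a TLS record, which means the client spoke plaintext HTTP to an endpoint that now expects TLS (a common state mid-upgrade, when etcd is switched to HTTPS before all of its clients are). A small decode of the leading bytes, with the record-layer meanings as I read them:

```shell
#!/bin/sh
# \x15\x03\x01\x00\x02\x02 decoded per the TLS record layer:
#   byte 0:    content type 0x15 = 21 = alert
#   bytes 1-2: record version 0x0301 = TLS 1.0
#   bytes 3-4: record length 0x0002 = 2
#   byte 5:    alert level 0x02 = fatal
content_type=$((0x15))
alert_level=$((0x02))
echo "content-type=$content_type (21 = TLS alert), level=$alert_level (2 = fatal)"
```

So the server answered the plaintext request with a fatal TLS alert, which the HTTP client then reported as a malformed response.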
6. What did you expect to happen?
Upgrade kubernetes
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml
to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
creationTimestamp: 2019-05-14T16:15:12Z
name: k8s.runningenv.cloud
spec:
api:
dns: {}
loadBalancer:
sslCertificate: arn:aws:acm:ca-central-1:111111111111:certificate/ccxcccss112-ddw-ddwd-ddsds-ddsds3232
type: Public
authentication:
aws: {}
authorization:
rbac: {}
channel: stable
cloudProvider: aws
configBase: s3://dummy-data-store/k8s.runningenv.cloud
etcdClusters:
- etcdMembers:
- instanceGroup: master-ca-central-1a
name: a
name: main
- etcdMembers:
- instanceGroup: master-ca-central-1a
name: a
name: events
iam:
allowContainerRegistry: true
legacy: false
kubelet:
anonymousAuth: false
kubernetesApiAccess:
- 1.1.1.1/32
- 2.2.2.2/32
kubernetesVersion: 1.12.8
masterInternalName: api.internal.k8s.runningenv.cloud
masterPublicName: api.k8s.runningenv.cloud
networkCIDR: 172.20.0.0/16
networking:
amazonvpc: {}
nonMasqueradeCIDR: 172.20.0.0/16
sshAccess:
- 1.1.1.1/32
- 2.2.2.2/32
subnets:
- cidr: 172.20.32.0/19
name: ca-central-1a
type: Public
zone: ca-central-1a
- cidr: 172.20.64.0/19
name: ca-central-1b
type: Public
zone: ca-central-1b
topology:
dns:
type: Public
masters: public
nodes: public
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: 2019-07-19T16:24:11Z
labels:
kops.k8s.io/cluster: k8s.runningenv.cloud
name: large_nodes
spec:
image: kope.io/k8s-1.12-debian-stretch-amd64-hvm-ebs-2019-05-13
machineType: t3.xlarge
maxSize: 2
minSize: 2
nodeLabels:
kops.k8s.io/instancegroup: large_nodes
role: Node
rootVolumeSize: 64
subnets:
- ca-central-1a
- ca-central-1b
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: 2019-05-14T16:15:13Z
labels:
kops.k8s.io/cluster: k8s.runningenv.cloud
name: master-ca-central-1a
spec:
image: kope.io/k8s-1.12-debian-stretch-amd64-hvm-ebs-2019-05-13
machineType: c4.large
maxSize: 1
minSize: 1
nodeLabels:
kops.k8s.io/instancegroup: master-ca-central-1a
role: Master
subnets:
- ca-central-1a
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: 2019-05-14T16:15:13Z
labels:
kops.k8s.io/cluster: k8s.runningenv.cloud
name: nodes
spec:
image: kope.io/k8s-1.12-debian-stretch-amd64-hvm-ebs-2019-05-13
machineType: t3.large
maxSize: 5
minSize: 5
nodeLabels:
kops.k8s.io/instancegroup: nodes
role: Node
subnets:
- ca-central-1a
- ca-central-1b
8. Please run the commands with most verbose logging by adding the -v 10
flag.
Paste the logs into this report, or in a gist and provide the gist link here.
Logs from kube-controller-manager
I0723 17:05:09.115034 1 shared_informer.go:119] stop requested
E0723 17:05:09.115219 1 controller_utils.go:1030] Unable to sync caches for garbage collector controller
E0723 17:05:09.115373 1 garbagecollector.go:233] timed out waiting for dependency graph builder sync during GC sync (attempt 300)
E0723 17:05:09.130184 1 reflector.go:125] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: client: etcd cluster is unavailable or misconfigured; error #0: net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
E0723 17:05:09.144553 1 reflector.go:125] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: client: etcd cluster is unavailable or misconfigured; error #0: net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
I0723 17:05:09.416248 1 garbagecollector.go:204] syncing garbage collector with updated resources from discovery (attempt 301): added: [{Group:auth.kope.io Version:v1alpha1 Resource:users} {Group:config.auth.kope.io Version:v1alpha1 Resource:authconfigurations} {Group:config.auth.kope.io Version:v1alpha1 Resource:authproviders}], removed: []
I0723 17:05:09.618069 1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
E0723 17:05:10.133851 1 reflector.go:125] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: Failed to list <nil>: client: etcd cluster is unavailable or misconfigured; error #0: net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
E0723 17:05:10.152706 1 reflector.go:125]
9. Anything else do we need to know?
Tried connecting to etcd and this was the response
root@ip-172-20-48-95:/tmp/etcd-download-test# ./etcdctl --endpoints https://127.0.0.1:2380 --cert-file=/etc/kubernetes/pki/kube-apiserver/etcd-client.crt --key-file=/etc/kubernetes/pki/kube-apiserver/etcd-client.key --ca-file=/etc/kubernetes/pki/kube-apiserver/etcd-ca.crt --debug cluster-health
Cluster-Endpoints: https://127.0.0.1:2380
cURL Command: curl -X GET https://127.0.0.1:2380/v2/members
cluster may be unhealthy: failed to list members
Error: client: etcd cluster is unavailable or misconfigured; error #0: x509: certificate signed by unknown authority
error #0: x509: certificate signed by unknown authority
kubectl get cs
response
I0723 13:16:52.688710 95389 request.go:947] Response Body: {"kind":"ComponentStatusList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/componentstatuses"},"items":[{"metadata":{"name":"scheduler","selfLink":"/api/v1/componentstatuses/scheduler","creationTimestamp":null},"conditions":[{"type":"Healthy","status":"True","message":"ok"}]},{"metadata":{"name":"controller-manager","selfLink":"/api/v1/componentstatuses/controller-manager","creationTimestamp":null},"conditions":[{"type":"Healthy","status":"True","message":"ok"}]},{"metadata":{"name":"etcd-1","selfLink":"/api/v1/componentstatuses/etcd-1","creationTimestamp":null},"conditions":[{"type":"Healthy","status":"True","message":"{\"health\": \"true\"}"}]},{"metadata":{"name":"etcd-0","selfLink":"/api/v1/componentstatuses/etcd-0","creationTimestamp":null},"conditions":[{"type":"Healthy","status":"True","message":"{\"health\": \"true\"}"}]}]}
I0723 13:16:52.693112 95389 table_printer.go:43] Unable to decode server response into a Table. Falling back to hardcoded types: attempt to decode non-Table object into a v1beta1.Table
I0723 13:16:52.693491 95389 table_printer.go:43] Unable to decode server response into a Table. Falling back to hardcoded types: attempt to decode non-Table object into a v1beta1.Table
I0723 13:16:52.693534 95389 table_printer.go:43] Unable to decode server response into a Table. Falling back to hardcoded types: attempt to decode non-Table object into a v1beta1.Table
I0723 13:16:52.693556 95389 table_printer.go:43] Unable to decode server response into a Table. Falling back to hardcoded types: attempt to decode non-Table object into a v1beta1.Table
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}