Error while waiting for Mnesia tables

Which chart: bitnami/rabbitmq@7.0.0. Describe the bug: It seems whenever the StatefulSet needs to update, but still has an existing PVC, we get stuck in a loop during initialization with the followin...

I seem to be encountering this error when my cluster hits its disk_free_limit now, too (chart v7.0.3).

I just ran a stress test where I loaded messages in until I hit the disk_free_limit (25GB, 128GB total disk). When the limit was hit, I saw the blocked message in the logs:

rabbitmq-1 rabbitmq 2020-06-24 21:50:47.355 [info] <0.384.0> Free disk space is insufficient. Free bytes: 26536415232. Limit: 26843545600
rabbitmq-1 rabbitmq 2020-06-24 21:50:47.355 [warning] <0.380.0> disk resource limit alarm set on node 'rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local'.
rabbitmq-1 rabbitmq
rabbitmq-1 rabbitmq **********************************************************
rabbitmq-1 rabbitmq *** Publishers will be blocked until this alarm clears ***
rabbitmq-1 rabbitmq **********************************************************

Followed by what appears to be a reboot sequence:

rabbitmq-1 rabbitmq 2020-06-24 21:50:47.355 [info] <0.268.0> Running boot step code_server_cache defined by app rabbit
rabbitmq-1 rabbitmq 2020-06-24 21:50:47.355 [info] <0.268.0> Running boot step file_handle_cache defined by app rabbit
rabbitmq-1 rabbitmq 2020-06-24 21:50:47.355 [info] <0.387.0> Limiting to approx 1048479 file handles (943629 sockets)
rabbitmq-1 rabbitmq 2020-06-24 21:50:47.355 [info] <0.388.0> FHC read buffering:  OFF
rabbitmq-1 rabbitmq 2020-06-24 21:50:47.355 [info] <0.388.0> FHC write buffering: ON
rabbitmq-1 rabbitmq 2020-06-24 21:50:47.356 [info] <0.268.0> Running boot step worker_pool defined by app rabbit
rabbitmq-1 rabbitmq 2020-06-24 21:50:47.356 [info] <0.377.0> Will use 8 processes for default worker pool
rabbitmq-1 rabbitmq 2020-06-24 21:50:47.356 [info] <0.377.0> Starting worker pool 'worker_pool' with 8 processes in it
rabbitmq-1 rabbitmq 2020-06-24 21:50:47.356 [info] <0.268.0> Running boot step database defined by app rabbit

And finally culminating in this Mnesia table issue:

rabbitmq-1 rabbitmq 2020-06-24 21:50:47.364 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
rabbitmq-1 rabbitmq 2020-06-24 21:51:17.365 [warning] <0.268.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,['rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local','rabbit@rabbitmq-0.rabbitmq-headless.mhill.svc.cluster.local'],[rabbit_durable_queue]}
rabbitmq-1 rabbitmq 2020-06-24 21:51:17.365 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 8 retries left
rabbitmq-1 rabbitmq 2020-06-24 21:51:47.366 [warning] <0.268.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,['rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local','rabbit@rabbitmq-0.rabbitmq-headless.mhill.svc.cluster.local'],[rabbit_durable_queue]}
rabbitmq-1 rabbitmq 2020-06-24 21:51:47.366 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 7 retries left
rabbitmq-1 rabbitmq 2020-06-24 21:52:17.367 [warning] <0.268.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,['rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local','rabbit@rabbitmq-0.rabbitmq-headless.mhill.svc.cluster.local'],[rabbit_durable_queue]}
rabbitmq-1 rabbitmq 2020-06-24 21:52:17.367 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 6 retries left
rabbitmq-1 rabbitmq 2020-06-24 21:52:47.368 [warning] <0.268.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,['rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local','rabbit@rabbitmq-0.rabbitmq-headless.mhill.svc.cluster.local'],[rabbit_durable_queue]}
rabbitmq-1 rabbitmq 2020-06-24 21:52:47.368 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 5 retries left
rabbitmq-1 rabbitmq 2020-06-24 21:53:17.369 [warning] <0.268.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,['rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local','rabbit@rabbitmq-0.rabbitmq-headless.mhill.svc.cluster.local'],[rabbit_durable_queue]}
rabbitmq-1 rabbitmq 2020-06-24 21:53:17.369 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 4 retries left
rabbitmq-1 rabbitmq 2020-06-24 21:53:47.370 [warning] <0.268.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,['rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local','rabbit@rabbitmq-0.rabbitmq-headless.mhill.svc.cluster.local'],[rabbit_durable_queue]}
rabbitmq-1 rabbitmq 2020-06-24 21:53:47.370 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 3 retries left
rabbitmq-1 rabbitmq 2020-06-24 21:54:08.744 [info] <0.60.0> SIGTERM received - shutting down
rabbitmq-1 rabbitmq 2020-06-24 21:54:08.745 [warning] <0.268.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,['rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local','rabbit@rabbitmq-0.rabbitmq-headless.mhill.svc.cluster.local'],[rabbit_durable_queue]}
rabbitmq-1 rabbitmq 2020-06-24 21:54:08.745 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 2 retries left
rabbitmq-1 rabbitmq 2020-06-24 21:54:08.745 [warning] <0.268.0> Error while waiting for Mnesia tables: {failed_waiting_for_tables,['rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local','rabbit@rabbitmq-0.rabbitmq-headless.mhill.svc.cluster.local'],{node_not_running,'rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local'}}
rabbitmq-1 rabbitmq 2020-06-24 21:54:08.746 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 1 retries left
rabbitmq-1 rabbitmq 2020-06-24 21:54:08.746 [warning] <0.268.0> Error while waiting for Mnesia tables: {failed_waiting_for_tables,['rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local','rabbit@rabbitmq-0.rabbitmq-headless.mhill.svc.cluster.local'],{node_not_running,'rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local'}}
rabbitmq-1 rabbitmq 2020-06-24 21:54:08.746 [info] <0.268.0> Waiting for Mnesia tables for 30000 ms, 0 retries left

I also see some other errors below this, but I am not sure if they are relevant:

rabbitmq-1 rabbitmq 2020-06-24 21:54:08.746 [error] <0.268.0> Feature flag `quorum_queue`: migration function crashed: {error,{failed_waiting_for_tables,['rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local','rabbit@rabbitmq-0.rabbitmq-headless.mhill.svc.cluster.local'],{node_not_running,'rabbit@rabbitmq-1.rabbitmq-headless.mhill.svc.cluster.local'}}}
rabbitmq-1 rabbitmq [{rabbit_table,wait,3,[{file,"src/rabbit_table.erl"},{line,120}]},{rabbit_core_ff,quorum_queue_migration,3,[{file,"src/rabbit_core_ff.erl"},{line,60}]},{rabbit_feature_flags,run_migration_fun,3,[{file,"src/rabbit_feature_flags.erl"},{line,1611}]},{rabbit_feature_flags,'-verify_which_feature_flags_are_actually_enabled/0-fun-2-',3,[{file,"src/rabbit_feature_flags.erl"},{line,2278}]},{maps,fold_1,3,[{file,"maps.erl"},{line,232}]},{rabbit_feature_flags,verify_which_feature_flags_are_actually_enabled,0,[{file,"src/rabbit_feature_flags.erl"},{line,2276}]},{rabbit_feature_flags,sync_feature_flags_with_cluster,3,[{file,"src/rabbit_feature_flags.erl"},{line,2091}]},{rabbit_mnesia,ensure_feature_flags_are_in_sync,2,[{file,"src/rabbit_mnesia.erl"},{line,656}]}]
rabbitmq-1 rabbitmq 2020-06-24 21:54:08.746 [error] <0.268.0> Feature flag `virtual_host_metadata`: migration function crashed: {aborted,{no_exists,rabbit_vhost,attributes}}
rabbitmq-1 rabbitmq [{mnesia,abort,1,[{file,"mnesia.erl"},{line,355}]},{rabbit_core_ff,virtual_host_metadata_migration,3,[{file,"src/rabbit_core_ff.erl"},{line,123}]},{rabbit_feature_flags,run_migration_fun,3,[{file,"src/rabbit_feature_flags.erl"},{line,1611}]},{rabbit_feature_flags,'-verify_which_feature_flags_are_actually_enabled/0-fun-2-',3,[{file,"src/rabbit_feature_flags.erl"},{line,2278}]},{maps,fold_1,3,[{file,"maps.erl"},{line,232}]},{rabbit_feature_flags,verify_which_feature_flags_are_actually_enabled,0,[{file,"src/rabbit_feature_flags.erl"},{line,2276}]},{rabbit_feature_flags,sync_feature_flags_with_cluster,3,[{file,"src/rabbit_feature_flags.erl"},{line,2091}]},{rabbit_mnesia,ensure_feature_flags_are_in_sync,2,[{file,"src/rabbit_mnesia.erl"},{line,656}]}]

This is pretty concerning, as it seems to indicate that if I ever hit my disk alarm, the entire cluster will crash and enter an unrecoverable state that requires a complete reinstall of the chart. If that is the case, the system is completely unusable for my needs.

Am I missing something obvious here?
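
For reference, the threshold that trips this alarm is the disk_free_limit setting in rabbitmq.conf; below is a minimal sketch matching the 25GB limit described above (with the Bitnami chart these lines would typically be injected through the chart's configuration values rather than a hand-edited file):

# rabbitmq.conf -- illustrative values, matching the 25GB limit quoted above
disk_free_limit.absolute = 25GB
# alternatively, as a multiple of installed RAM (RabbitMQ's own default is 50MB absolute):
# disk_free_limit.relative = 1.5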

I have installed RabbitMQ using a Helm chart on a Kubernetes cluster. The rabbitmq pod keeps restarting. On inspecting the pod logs I get the error below:

2020-02-26 04:42:31.582 [warning] <0.314.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,[rabbit_durable_queue]}
2020-02-26 04:42:31.582 [info] <0.314.0> Waiting for Mnesia tables for 30000 ms, 6 retries left
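
The "30000 ms" and "retries left" figures in these messages come from two RabbitMQ settings that can be raised to give peers more time to appear; a hedged sketch in rabbitmq.conf (the values shown are the defaults, and raising them only helps if the missing peer eventually comes back):

# rabbitmq.conf -- defaults shown; increase to wait longer for cluster peers
mnesia_table_loading_retry_timeout = 30000
mnesia_table_loading_retry_limit = 10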

When I run kubectl describe pod I get the following output:

Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-rabbitmq-0
    ReadOnly:   false
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      rabbitmq-config
    Optional:  false
  healthchecks:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      rabbitmq-healthchecks
    Optional:  false
  rabbitmq-token-w74kb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rabbitmq-token-w74kb
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/arch=amd64
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                      From                                               Message
  ----     ------     ----                     ----                                               -------
  Warning  Unhealthy  3m27s (x878 over 7h21m)  kubelet, gke-analytics-default-pool-918f5943-w0t0  Readiness probe failed: Timeout: 70 seconds ...
Checking health of node rabbit@rabbitmq-0.rabbitmq-headless.default.svc.cluster.local ...
Status of node rabbit@rabbitmq-0.rabbitmq-headless.default.svc.cluster.local ...
Error:
{:aborted, {:no_exists, [:rabbit_vhost, [{{:vhost, :"$1", :_, :_}, [], [:"$1"]}]]}}
Error:
{:aborted, {:no_exists, [:rabbit_vhost, [{{:vhost, :"$1", :_, :_}, [], [:"$1"]}]]}}

I have provisioned the above on a Kubernetes cluster on Google Cloud. I am not sure what specific situation caused it to start failing; I had to restart the pod, and it has been failing ever since.

What is the issue here?
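
The {:aborted, {:no_exists, [:rabbit_vhost, ...]}} errors come from the readiness probe querying a Mnesia table that does not exist yet, because the rabbit app never finished booting. A couple of commands that help confirm this from outside the probe (a sketch; the pod name is taken from the describe output above):

kubectl logs rabbitmq-0 --previous | tail -n 50   # logs from the last crashed run
kubectl exec rabbitmq-0 -- rabbitmqctl status     # fails while the app is not booted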

6 Answers

Best answer

Test this deployment:

kind: Service
apiVersion: v1
metadata:
  namespace: rabbitmq-namespace
  name: rabbitmq
  labels:
    app: rabbitmq
    type: LoadBalancer  
spec:
  type: NodePort
  ports:
   - name: http
     protocol: TCP
     port: 15672
     targetPort: 15672
     nodePort: 31672
   - name: amqp
     protocol: TCP
     port: 5672
     targetPort: 5672
     nodePort: 30672
   - name: stomp
     protocol: TCP
     port: 61613
     targetPort: 61613
  selector:
    app: rabbitmq
---
kind: Service 
apiVersion: v1
metadata:
  namespace: rabbitmq-namespace
  name: rabbitmq-lb
  labels:
    app: rabbitmq
spec:
  # Headless service to give the StatefulSet a DNS which is known in the cluster (hostname-#.app.namespace.svc.cluster.local, )
  # in our case - rabbitmq-#.rabbitmq.rabbitmq-namespace.svc.cluster.local  
  clusterIP: None
  ports:
   - name: http
     protocol: TCP
     port: 15672
     targetPort: 15672
   - name: amqp
     protocol: TCP
     port: 5672
     targetPort: 5672
   - name: stomp
     port: 61613
  selector:
    app: rabbitmq
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-config
  namespace: rabbitmq-namespace
data:
  enabled_plugins: |
      [rabbitmq_management,rabbitmq_peer_discovery_k8s,rabbitmq_stomp].

  rabbitmq.conf: |
      ## Cluster formation. See http://www.rabbitmq.com/cluster-formation.html to learn more.
      cluster_formation.peer_discovery_backend  = rabbit_peer_discovery_k8s
      cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
      ## Should RabbitMQ node name be computed from the pod's hostname or IP address?
      ## IP addresses are not stable, so using [stable] hostnames is recommended when possible.
      ## Set to "hostname" to use pod hostnames.
      ## When this value is changed, so should the variable used to set the RABBITMQ_NODENAME
      ## environment variable.
      cluster_formation.k8s.address_type = hostname   
      ## Important - this is the suffix of the hostname, as each node gets "rabbitmq-#", we need to tell what's the suffix
      ## it will give each new node that enters the way to contact the other peer node and join the cluster (if using hostname)
      cluster_formation.k8s.hostname_suffix = .rabbitmq.rabbitmq-namespace.svc.cluster.local
      ## How often should node cleanup checks run?
      cluster_formation.node_cleanup.interval = 30
      ## Set to false if automatic removal of unknown/absent nodes
      ## is desired. This can be dangerous, see
      ##  * http://www.rabbitmq.com/cluster-formation.html#node-health-checks-and-cleanup
      ##  * https://groups.google.com/forum/#!msg/rabbitmq-users/wuOfzEywHXo/k8z_HWIkBgAJ
      cluster_formation.node_cleanup.only_log_warning = true
      cluster_partition_handling = autoheal
      ## See http://www.rabbitmq.com/ha.html#master-migration-data-locality
      queue_master_locator=min-masters
      ## See http://www.rabbitmq.com/access-control.html#loopback-users
      loopback_users.guest = false
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
  namespace: rabbitmq-namespace
spec:
  serviceName: rabbitmq
  replicas: 3
  selector:
    matchLabels:
      name: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
        name: rabbitmq
        state: rabbitmq
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      serviceAccountName: rabbitmq
      terminationGracePeriodSeconds: 10
      containers:        
      - name: rabbitmq-k8s
        image: rabbitmq:3.8.3
        volumeMounts:
          - name: config-volume
            mountPath: /etc/rabbitmq
          - name: data
            mountPath: /var/lib/rabbitmq/mnesia
        ports:
          - name: http
            protocol: TCP
            containerPort: 15672
          - name: amqp
            protocol: TCP
            containerPort: 5672
        livenessProbe:
          exec:
            command: ["rabbitmqctl", "status"]
          initialDelaySeconds: 60
          periodSeconds: 60
          timeoutSeconds: 10
        resources:
            requests:
              memory: "0"
              cpu: "0"
            limits:
              memory: "2048Mi"
              cpu: "1000m"
        readinessProbe:
          exec:
            command: ["rabbitmqctl", "status"]
          initialDelaySeconds: 20
          periodSeconds: 60
          timeoutSeconds: 10
        imagePullPolicy: Always
        env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: HOSTNAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: RABBITMQ_USE_LONGNAME
            value: "true"
          # See a note on cluster_formation.k8s.address_type in the config file section
          - name: RABBITMQ_NODENAME
            value: "rabbit@$(HOSTNAME).rabbitmq.$(NAMESPACE).svc.cluster.local"
          - name: K8S_SERVICE_NAME
            value: "rabbitmq"
          - name: RABBITMQ_ERLANG_COOKIE
            value: "mycookie"      
      volumes:
        - name: config-volume
          configMap:
            name: rabbitmq-config
            items:
            - key: rabbitmq.conf
              path: rabbitmq.conf
            - key: enabled_plugins
              path: enabled_plugins
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - "ReadWriteOnce"
      storageClassName: "default"
      resources:
        requests:
          storage: 3Gi

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbitmq 
  namespace: rabbitmq-namespace 
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: endpoint-reader
  namespace: rabbitmq-namespace 
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: endpoint-reader
  namespace: rabbitmq-namespace
subjects:
- kind: ServiceAccount
  name: rabbitmq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
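
To try this manifest, something like the following should work (a sketch; the file name rabbitmq.yaml is hypothetical):

kubectl create namespace rabbitmq-namespace
kubectl apply -f rabbitmq.yaml
kubectl -n rabbitmq-namespace get pods -w   # watch the nodes come up one by one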


-7

Amir Soleimani Borujerdi
19 Jul 2020 at 14:25

I just deleted the existing persistent volume claim and reinstalled rabbitmq, and it started working.

So, every time after installing rabbitmq on a kubernetes cluster, if I scale the pods down to 0 and then scale them back up later, I get the same error. I also tried deleting the Persistent Volume Claim without uninstalling the rabbitmq helm chart, but I still got the same error.

So it seems that every time I scale the cluster down to 0, I need to uninstall the rabbitmq helm chart, delete the corresponding Persistent Volume Claims, and install the rabbitmq helm chart again for it to work.
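
A hedged sketch of that sequence, assuming a release named rabbitmq and the claim name data-rabbitmq-0 shown in the describe output above (note that deleting the PVC discards all queue data):

helm delete rabbitmq
kubectl get pvc                      # list the claims the StatefulSet left behind
kubectl delete pvc data-rabbitmq-0
helm install rabbitmq bitnami/rabbitmq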


13

jeril
27 Feb 2020 at 07:20

I also got an error similar to the one below.

2020-06-05 03:45:37.153 [info] <0.234.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2020-06-05 03:46:07.154 [warning] <0.234.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,[rabbit_user,rabbit_user_permission,rabbit_topic_permission,rabbit_vhost,rabbit_durable_route,rabbit_durable_exchange,rabbit_runtime_parameters,rabbit_durable_queue]}
2020-06-05 03:46:07.154 [info] <0.234.0> Waiting for Mnesia tables for 30000 ms, 8 retries left

In my case, a slave node (server) of the RabbitMQ cluster was down. Once I started the slave node, the master node started without errors.
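
A quick way to check for a down peer like this is rabbitmqctl's cluster status, run on any node that is still reachable:

rabbitmqctl cluster_status   # compare the configured node list with the running nodes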


0

Muhammad Dyas Yaskur
5 Jun 2020 at 20:05

In my case the solution was simple:

Step 1: Scale down the StatefulSet; this does not delete the PVCs.

kubectl scale statefulsets rabbitmq-1-rabbitmq --namespace teps-rabbitmq --replicas=1

Step 2: Get a shell in the RabbitMQ pod.

kubectl exec -it rabbitmq-1-rabbitmq-0 -n teps-rabbitmq -- bash

Step 3: Reset the cluster.

rabbitmqctl stop_app
rabbitmqctl force_boot

Step 4: Scale the StatefulSet back up.

kubectl scale statefulsets rabbitmq-1-rabbitmq --namespace teps-rabbitmq --replicas=4
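
One caveat worth hedging: force_boot only records a marker that is honoured on the next boot, so depending on the image's entrypoint you may need to start the app again (or simply let the pod restart) before scaling up:

rabbitmqctl start_app   # restart the stopped app on the forced node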


1

Ali Ahmad
8 Dec 2021 at 09:44

TL;DR

helm upgrade rabbitmq bitnami/rabbitmq --set clustering.forceBoot=true

Problem

The problem happens for the following reason:

  • All RMQ pods are terminated at the same time for some reason (maybe because you explicitly set the StatefulSet replicas to 0, or something else)
  • One of them is the last to stop (maybe just a tiny bit after the others). It stores this condition ("I'm standalone now") in its filesystem, which in k8s is the PersistentVolume(Claim). Say this is pod rabbitmq-1.
  • When you spin the StatefulSet back up, the pod rabbitmq-0 is always started first (see here).
  • During startup, pod rabbitmq-0 first checks whether it is supposed to run standalone. But as far as it can see in its own filesystem, it is part of a cluster. So it looks for its peers and does not find them. By default, this results in a startup failure.
  • rabbitmq-0 therefore never becomes ready.
  • rabbitmq-1 never starts, because that is how StatefulSets are deployed: one after another. If it were allowed to start, it would start successfully, because it sees that it may run standalone as well.

So in the end, there is a slight mismatch between how RabbitMQ and StatefulSets work. RMQ says: "if everything goes down, just start everything at the same time; one of them will be able to start, and as soon as that one is up, the others can rejoin the cluster." k8s StatefulSets say: "starting everything at once is not possible; we will start with 0."

Solution

To fix this, there is a force_boot command for rabbitmqctl, which basically tells an instance to start standalone if it does not find any peers. How you can use this from Kubernetes depends on the Helm chart and container you are using. In the Bitnami chart, which uses the Bitnami Docker image, there is a value clustering.forceBoot = true, which translates to the env variable RABBITMQ_FORCE_BOOT = yes in the container, which will then issue the above command for you.

But looking at the problem, you can also see why deleting the PVCs works (the other answer). The pods will all just "forget" that they were part of an RMQ cluster the last time, and happily start. I would prefer the solution above, though, as no data is lost.
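
To confirm the flag actually reached the pods after the upgrade, something like this works (a sketch assuming the Bitnami chart's default resource names):

kubectl get statefulset rabbitmq -o yaml | grep -A 1 RABBITMQ_FORCE_BOOT
kubectl rollout status statefulset rabbitmq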


40

Ulli
10 Mar 2021 at 18:05

IF you are in the same scenario as me, where you do not know who deployed the Helm chart or how it was deployed... you can edit the StatefulSet directly, so as not to mess up anything else.

I was able to make it work without deleting the helm chart:

kubectl -n rabbitmq edit statefulsets.apps rabbitmq

In the spec section I added the env variable RABBITMQ_FORCE_BOOT = yes:

    spec:
      containers:
      - env:
        - name: RABBITMQ_FORCE_BOOT # New Line 1 Added
          value: "yes"              # New Line 2 Added

And this should also fix the issue... but please try doing it the proper way first, as Ulli explained above.
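
An equivalent one-liner that avoids hand-editing the manifest (assuming, as above, a StatefulSet named rabbitmq in the rabbitmq namespace):

kubectl -n rabbitmq set env statefulset/rabbitmq RABBITMQ_FORCE_BOOT=yes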


1

vidbaz
22 Mar 2022 at 00:54

I am trying to bring up the rabbitmq UI by pulling the image and then running it with docker.

docker run --name rabbit-p -p 15672:15672 -p 5672:5672 rabbitmq:latest

I even tried using docker compose. Below is my docker-compose.yaml file:

version: '3'
services:
  messaging:
    image: "messaging-producer"
    ports: 
     - "7878:9876"
  rabbitmq:
    image: "rabbitmq:latest"
    ports: 
     - "15762:15762"
     - "5672:5672"

In both cases it shows Server startup complete, but when I try to hit http://localhost:15672/ it says This page isn't working.

When I ran the command the first time it worked perfectly, but after that it stopped working.

Below is the log:

 Starting RabbitMQ 3.7.16 on Erlang 22.0.7
 Copyright (C) 2007-2019 Pivotal Software, Inc.
 Licensed under the MPL.  See https://www.rabbitmq.com/

  ##  ##
  ##  ##      RabbitMQ 3.7.16. Copyright (C) 2007-2019 Pivotal Software, Inc.
  ##########  Licensed under the MPL.  See https://www.rabbitmq.com/
  ######  ##
  ##########  Logs: <stdout>

              Starting broker...
2019-07-23 05:34:00.741 [info] <0.218.0>
 node           : rabbit@a9bf56e20b16
 home dir       : /var/lib/rabbitmq
 config file(s) : /etc/rabbitmq/rabbitmq.conf
 cookie hash    : Z1whNqA2n91M8s2sMDqaOA==
 log(s)         : <stdout>
 database dir   : /var/lib/rabbitmq/mnesia/rabbit@a9bf56e20b16
2019-07-23 05:34:02.237 [info] <0.218.0> Running boot step pre_boot defined by app rabbit
2019-07-23 05:34:02.237 [info] <0.218.0> Running boot step rabbit_core_metrics defined by app rabbit
2019-07-23 05:34:02.238 [info] <0.218.0> Running boot step rabbit_alarm defined by app rabbit
2019-07-23 05:34:02.243 [info] <0.226.0> Memory high watermark set to 792 MiB (830613094 bytes) of 1980 MiB (2076532736 bytes) total
2019-07-23 05:34:02.247 [info] <0.228.0> Enabling free disk space monitoring
2019-07-23 05:34:02.247 [info] <0.228.0> Disk free limit set to 50MB
2019-07-23 05:34:02.251 [info] <0.218.0> Running boot step code_server_cache defined by app rabbit
2019-07-23 05:34:02.251 [info] <0.218.0> Running boot step file_handle_cache defined by app rabbit
2019-07-23 05:34:02.251 [info] <0.231.0> Limiting to approx 1048476 file handles (943626 sockets)
2019-07-23 05:34:02.251 [info] <0.232.0> FHC read buffering:  OFF
2019-07-23 05:34:02.251 [info] <0.232.0> FHC write buffering: ON
2019-07-23 05:34:02.252 [info] <0.218.0> Running boot step worker_pool defined by app rabbit
2019-07-23 05:34:02.253 [info] <0.219.0> Will use 2 processes for default worker pool
2019-07-23 05:34:02.253 [info] <0.219.0> Starting worker pool 'worker_pool' with 2 processes in it
2019-07-23 05:34:02.253 [info] <0.218.0> Running boot step database defined by app rabbit
2019-07-23 05:34:02.254 [info] <0.218.0> Node database directory at /var/lib/rabbitmq/mnesia/rabbit@a9bf56e20b16 is empty. Assuming we need to join an existing cluster or initialise from scratch...
2019-07-23 05:34:02.254 [info] <0.218.0> Configured peer discovery backend: rabbit_peer_discovery_classic_config
2019-07-23 05:34:02.254 [info] <0.218.0> Will try to lock with peer discovery backend rabbit_peer_discovery_classic_config
2019-07-23 05:34:02.254 [info] <0.218.0> Peer discovery backend does not support locking, falling back to randomized delay
2019-07-23 05:34:02.254 [info] <0.218.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping randomized startup delay.
2019-07-23 05:34:02.254 [info] <0.218.0> All discovered existing cluster peers:
2019-07-23 05:34:02.254 [info] <0.218.0> Discovered no peer nodes to cluster with
2019-07-23 05:34:02.256 [info] <0.43.0> Application mnesia exited with reason: stopped
2019-07-23 05:34:02.385 [info] <0.218.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2019-07-23 05:34:02.412 [info] <0.218.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2019-07-23 05:34:02.445 [info] <0.218.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2019-07-23 05:34:02.445 [info] <0.218.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping registration.
2019-07-23 05:34:02.445 [info] <0.218.0> Running boot step database_sync defined by app rabbit
2019-07-23 05:34:02.445 [info] <0.218.0> Running boot step codec_correctness_check defined by app rabbit
2019-07-23 05:34:02.445 [info] <0.218.0> Running boot step external_infrastructure defined by app rabbit
2019-07-23 05:34:02.445 [info] <0.218.0> Running boot step rabbit_registry defined by app rabbit
2019-07-23 05:34:02.445 [info] <0.218.0> Running boot step rabbit_auth_mechanism_cr_demo defined by app rabbit
2019-07-23 05:34:02.445 [info] <0.218.0> Running boot step rabbit_queue_location_random defined by app rabbit
2019-07-23 05:34:02.445 [info] <0.218.0> Running boot step rabbit_event defined by app rabbit
2019-07-23 05:34:02.445 [info] <0.218.0> Running boot step rabbit_auth_mechanism_amqplain defined by app rabbit
2019-07-23 05:34:02.446 [info] <0.218.0> Running boot step rabbit_auth_mechanism_plain defined by app rabbit
2019-07-23 05:34:02.446 [info] <0.218.0> Running boot step rabbit_exchange_type_direct defined by app rabbit
2019-07-23 05:34:02.446 [info] <0.218.0> Running boot step rabbit_exchange_type_fanout defined by app rabbit
2019-07-23 05:34:02.446 [info] <0.218.0> Running boot step rabbit_exchange_type_headers defined by app rabbit
2019-07-23 05:34:02.446 [info] <0.218.0> Running boot step rabbit_exchange_type_topic defined by app rabbit
2019-07-23 05:34:02.446 [info] <0.218.0> Running boot step rabbit_mirror_queue_mode_all defined by app rabbit
2019-07-23 05:34:02.446 [info] <0.218.0> Running boot step rabbit_mirror_queue_mode_exactly defined by app rabbit
2019-07-23 05:34:02.446 [info] <0.218.0> Running boot step rabbit_mirror_queue_mode_nodes defined by app rabbit
2019-07-23 05:34:02.446 [info] <0.218.0> Running boot step rabbit_priority_queue defined by app rabbit
2019-07-23 05:34:02.446 [info] <0.218.0> Priority queues enabled, real BQ is rabbit_variable_queue
2019-07-23 05:34:02.447 [info] <0.218.0> Running boot step rabbit_queue_location_client_local defined by app rabbit
2019-07-23 05:34:02.447 [info] <0.218.0> Running boot step rabbit_queue_location_min_masters defined by app rabbit
2019-07-23 05:34:02.447 [info] <0.218.0> Running boot step kernel_ready defined by app rabbit
2019-07-23 05:34:02.447 [info] <0.218.0> Running boot step rabbit_sysmon_minder defined by app rabbit
2019-07-23 05:34:02.447 [info] <0.218.0> Running boot step rabbit_epmd_monitor defined by app rabbit
2019-07-23 05:34:02.452 [info] <0.218.0> Running boot step guid_generator defined by app rabbit
2019-07-23 05:34:02.456 [info] <0.218.0> Running boot step rabbit_node_monitor defined by app rabbit
2019-07-23 05:34:02.456 [info] <0.402.0> Starting rabbit_node_monitor
2019-07-23 05:34:02.457 [info] <0.218.0> Running boot step delegate_sup defined by app rabbit
2019-07-23 05:34:02.457 [info] <0.218.0> Running boot step rabbit_memory_monitor defined by app rabbit
2019-07-23 05:34:02.457 [info] <0.218.0> Running boot step core_initialized defined by app rabbit
2019-07-23 05:34:02.457 [info] <0.218.0> Running boot step upgrade_queues defined by app rabbit
2019-07-23 05:34:02.482 [info] <0.218.0> message_store upgrades: 1 to apply
2019-07-23 05:34:02.483 [info] <0.218.0> message_store upgrades: Applying rabbit_variable_queue:move_messages_to_vhost_store
2019-07-23 05:34:02.483 [info] <0.218.0> message_store upgrades: No durable queues found. Skipping message store migration
2019-07-23 05:34:02.483 [info] <0.218.0> message_store upgrades: Removing the old message store data
2019-07-23 05:34:02.484 [info] <0.218.0> message_store upgrades: All upgrades applied successfully
2019-07-23 05:34:02.514 [info] <0.218.0> Running boot step rabbit_connection_tracking defined by app rabbit
2019-07-23 05:34:02.514 [info] <0.218.0> Running boot step rabbit_connection_tracking_handler defined by app rabbit
2019-07-23 05:34:02.514 [info] <0.218.0> Running boot step rabbit_exchange_parameters defined by app rabbit
2019-07-23 05:34:02.515 [info] <0.218.0> Running boot step rabbit_mirror_queue_misc defined by app rabbit
2019-07-23 05:34:02.517 [info] <0.218.0> Running boot step rabbit_policies defined by app rabbit
2019-07-23 05:34:02.518 [info] <0.218.0> Running boot step rabbit_policy defined by app rabbit
2019-07-23 05:34:02.518 [info] <0.218.0> Running boot step rabbit_queue_location_validator defined by app rabbit
2019-07-23 05:34:02.518 [info] <0.218.0> Running boot step rabbit_vhost_limit defined by app rabbit
2019-07-23 05:34:02.518 [info] <0.218.0> Running boot step rabbit_mgmt_reset_handler defined by app rabbitmq_management
2019-07-23 05:34:02.518 [info] <0.218.0> Running boot step rabbit_mgmt_db_handler defined by app rabbitmq_management_agent
2019-07-23 05:34:02.518 [info] <0.218.0> Management plugin: using rates mode 'basic'
2019-07-23 05:34:02.519 [info] <0.218.0> Running boot step recovery defined by app rabbit
2019-07-23 05:34:02.520 [info] <0.218.0> Running boot step load_definitions defined by app rabbitmq_management
2019-07-23 05:34:02.520 [info] <0.218.0> Running boot step empty_db_check defined by app rabbit
2019-07-23 05:34:02.520 [info] <0.218.0> Adding vhost '/'
2019-07-23 05:34:02.558 [info] <0.443.0> Making sure data directory '/var/lib/rabbitmq/mnesia/rabbit@a9bf56e20b16/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
2019-07-23 05:34:02.564 [info] <0.443.0> Starting message stores for vhost '/'
2019-07-23 05:34:02.565 [info] <0.447.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2019-07-23 05:34:02.566 [info] <0.443.0> Started message store of type transient for vhost '/'
2019-07-23 05:34:02.566 [info] <0.450.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2019-07-23 05:34:02.567 [warning] <0.450.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": rebuilding indices from scratch
2019-07-23 05:34:02.568 [info] <0.443.0> Started message store of type persistent for vhost '/'
2019-07-23 05:34:02.570 [info] <0.218.0> Creating user 'guest'
2019-07-23 05:34:02.576 [info] <0.218.0> Setting user tags for user 'guest' to [administrator]
2019-07-23 05:34:02.581 [info] <0.218.0> Setting permissions for 'guest' in '/' to '.*', '.*', '.*'
2019-07-23 05:34:02.586 [info] <0.218.0> Running boot step rabbit_looking_glass defined by app rabbit
2019-07-23 05:34:02.586 [info] <0.218.0> Running boot step rabbit_core_metrics_gc defined by app rabbit
2019-07-23 05:34:02.586 [info] <0.218.0> Running boot step background_gc defined by app rabbit
2019-07-23 05:34:02.586 [info] <0.218.0> Running boot step connection_tracking defined by app rabbit
2019-07-23 05:34:02.592 [info] <0.218.0> Setting up a table for connection tracking on this node: tracked_connection_on_node_rabbit@a9bf56e20b16
2019-07-23 05:34:02.598 [info] <0.218.0> Setting up a table for per-vhost connection counting on this node: tracked_connection_per_vhost_on_node_rabbit@a9bf56e20b16
2019-07-23 05:34:02.598 [info] <0.218.0> Running boot step routing_ready defined by app rabbit
2019-07-23 05:34:02.598 [info] <0.218.0> Running boot step pre_flight defined by app rabbit
2019-07-23 05:34:02.599 [info] <0.218.0> Running boot step notify_cluster defined by app rabbit
2019-07-23 05:34:02.599 [info] <0.218.0> Running boot step networking defined by app rabbit
2019-07-23 05:34:02.602 [warning] <0.482.0> Setting Ranch options together with socket options is deprecated. Please use the new map syntax that allows specifying socket options separately from other options.
2019-07-23 05:34:02.602 [info] <0.496.0> started TCP listener on [::]:5672
2019-07-23 05:34:02.603 [info] <0.218.0> Running boot step direct_client defined by app rabbit
2019-07-23 05:34:02.634 [info] <0.556.0> Management plugin: HTTP (non-TLS) listener started on port 15672
2019-07-23 05:34:02.634 [info] <0.662.0> Statistics database started.
2019-07-23 05:34:02.634 [info] <0.661.0> Starting worker pool 'management_worker_pool' with 3 processes in it
 completed with 3 plugins.
2019-07-23 05:34:02.764 [info] <0.8.0> Server startup complete; 3 plugins started.
 * rabbitmq_management
 * rabbitmq_web_dispatch
 * rabbitmq_management_agent
Stopping and halting node rabbit@a9bf56e20b16 ...
2019-07-23 05:34:31.944 [info] <0.677.0> RabbitMQ is asked to stop...
2019-07-23 05:34:31.981 [info] <0.677.0> Stopping RabbitMQ applications and their dependencies in the following order:
    rabbitmq_management
    amqp_client
    rabbitmq_web_dispatch
    cowboy
    cowlib
    rabbitmq_management_agent
    rabbit
    mnesia
    rabbit_common
    sysmon_handler
    os_mon
2019-07-23 05:34:31.981 [info] <0.677.0> Stopping application 'rabbitmq_management'
2019-07-23 05:34:31.983 [warning] <0.552.0> RabbitMQ HTTP listener registry could not find context rabbitmq_management_tls
2019-07-23 05:34:31.984 [info] <0.43.0> Application rabbitmq_management exited with reason: stopped
2019-07-23 05:34:31.984 [info] <0.677.0> Stopping application 'amqp_client'
2019-07-23 05:34:31.986 [info] <0.43.0> Application amqp_client exited with reason: stopped
2019-07-23 05:34:31.986 [info] <0.677.0> Stopping application 'rabbitmq_web_dispatch'
2019-07-23 05:34:31.987 [info] <0.43.0> Application rabbitmq_web_dispatch exited with reason: stopped
2019-07-23 05:34:31.987 [info] <0.677.0> Stopping application 'cowboy'
2019-07-23 05:34:31.989 [info] <0.43.0> Application cowboy exited with reason: stopped
2019-07-23 05:34:31.989 [info] <0.677.0> Stopping application 'cowlib'
2019-07-23 05:34:31.989 [info] <0.43.0> Application cowlib exited with reason: stopped
2019-07-23 05:34:31.989 [info] <0.677.0> Stopping application 'rabbitmq_management_agent'
2019-07-23 05:34:31.991 [info] <0.43.0> Application rabbitmq_management_agent exited with reason: stopped
2019-07-23 05:34:31.991 [info] <0.677.0> Stopping application 'rabbit'
2019-07-23 05:34:31.991 [info] <0.218.0> Peer discovery backend rabbit_peer_discovery_classic_config does not support registration, skipping unregistration.
2019-07-23 05:34:31.992 [info] <0.496.0> stopped TCP listener on [::]:5672
2019-07-23 05:34:31.993 [info] <0.423.0> Closing all connections in vhost '/' on node 'rabbit@a9bf56e20b16' because the vhost is stopping
2019-07-23 05:34:31.993 [info] <0.450.0> Stopping message store for directory '/var/lib/rabbitmq/mnesia/rabbit@a9bf56e20b16/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent'
2019-07-23 05:34:32.001 [info] <0.450.0> Message store for directory '/var/lib/rabbitmq/mnesia/rabbit@a9bf56e20b16/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent' is stopped
2019-07-23 05:34:32.001 [info] <0.447.0> Stopping message store for directory '/var/lib/rabbitmq/mnesia/rabbit@a9bf56e20b16/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_transient'
2019-07-23 05:34:32.006 [info] <0.447.0> Message store for directory '/var/lib/rabbitmq/mnesia/rabbit@a9bf56e20b16/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_transient' is stopped
2019-07-23 05:34:32.009 [info] <0.677.0> Stopping application 'mnesia'
2019-07-23 05:34:32.009 [info] <0.43.0> Application rabbit exited with reason: stopped
2019-07-23 05:34:32.012 [info] <0.677.0> Stopping application 'rabbit_common'
2019-07-23 05:34:32.012 [info] <0.43.0> Application mnesia exited with reason: stopped
2019-07-23 05:34:32.012 [info] <0.677.0> Stopping application 'sysmon_handler'
2019-07-23 05:34:32.012 [info] <0.43.0> Application rabbit_common exited with reason: stopped
2019-07-23 05:34:32.014 [info] <0.43.0> Application sysmon_handler exited with reason: stopped
2019-07-23 05:34:32.014 [info] <0.677.0> Stopping application 'os_mon'
2019-07-23 05:34:32.015 [info] <0.677.0> Successfully stopped RabbitMQ and its dependencies
2019-07-23 05:34:32.015 [info] <0.43.0> Application os_mon exited with reason: stopped
Gracefully halting Erlang VM
2019-07-23 05:34:32.016 [info] <0.677.0> Halting Erlang VM with the following applications:
    ranch
    ssl
    public_key
    asn1
    crypto
    observer_cli
    recon
    inets
    jsx
    xmerl
    lager
    goldrush
    compiler
    syntax_tools
    sasl
    stdlib
    kernel

Any help's appreciated.

This question has been marked as solved.


Answers
2

From your post, I understand that you want to access the RabbitMQ management console. If that is the case, you are using the wrong image. The correct image is rabbitmq:management. Also, the internal port is 15672, not 15762.

Here is the correct version for testing the management console:

version: '3'
services:
  rabbitmq:
    image: "rabbitmq:management"
    ports:
     - "15672:15672"
     - "5672:5672"

Then you can browse to http://localhost:15672
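
The same fix without compose, for anyone who started from the docker run command above (a sketch):

docker run -d --name rabbit -p 15672:15672 -p 5672:5672 rabbitmq:management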

This works for me:

docker-compose.yml file:

version: "3.8"

services:
  rabbitmq:
    container_name: rabbitmq
    image: "rabbitmq:3.8.9-management"
    volumes:
      - ./.docker/rabbitmq/etc/:/etc/rabbitmq/
      - ./.docker/rabbitmq/data/:/var/lib/rabbitmq/
      - ./.docker/rabbitmq/logs/:/var/log/rabbitmq/
    environment:
      RABBITMQ_ERLANG_COOKIE: secret-cookie
      RABBITMQ_DEFAULT_USER: rabbituser
      RABBITMQ_DEFAULT_PASS: password
      RABBITMQ_DEFAULT_VHOST: /rabbit-vh
    
    ports:
      - 5672:5672
      - 15672:15672

Access the docker container:

docker exec -it rabbitmq /bin/bash

# then enable the management plugin
# https://www.rabbitmq.com/management.html#getting-started

rabbitmq-plugins enable rabbitmq_management
Enabling plugins on node rabbit@592e772911d9:
rabbitmq_management
The following plugins have been configured:
  rabbitmq_management
  rabbitmq_management_agent
  rabbitmq_web_dispatch
Applying plugin configuration to rabbit@592e772911d9...
The following plugins have been enabled:
  rabbitmq_management
  rabbitmq_management_agent
  rabbitmq_web_dispatch

started 3 plugins.

In the end, I can access http://localhost:15672/ normally.
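
With the RABBITMQ_DEFAULT_USER/RABBITMQ_DEFAULT_PASS values above, the management API can also be checked from the host; note that guest/guest would be rejected here, since the default guest user is only allowed from localhost inside the container:

curl -u rabbituser:password http://localhost:15672/api/overview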
