Error timed out waiting for any update progress to be made

Description
Xingxing Xia, 2016-03-03 03:26:01 UTC

Created attachment 1132617 [details]
hooks-1-deploy.yaml

Description of problem:
When updating a deployment config with changed components, the deployment always fails with the error "timed out waiting for any update progress to be made" after about 2 minutes.
The problem is only reproduced in OSE/AEP; it works in Origin.

Version-Release number of selected component (if applicable):
openshift/oc v3.1.1.908
kubernetes v1.2.0-alpha.7-703-gbc4550d
etcd 2.2.5

How reproducible:
Always

Steps to Reproduce:
1.
$ oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/deployment/deployment1.json
deploymentconfig "hooks" created
$ oc get pod hooks-1-deploy -o yaml > hooks-1-deploy.yaml
$ oc get pod
NAME            READY     STATUS    RESTARTS   AGE
hooks-1-wl8j2   1/1       Running   0          1m

2. Update
$ oc replace -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/deployment/updatev1.json
deploymentconfig "hooks" replaced
$ oc get pod hooks-2-deploy -o yaml > hooks-2-deploy.yaml
$ oc get pod
NAME             READY     STATUS    RESTARTS   AGE
hooks-1-wl8j2    1/1       Running   0          3m
hooks-2-5cdy7    1/1       Running   0          1m
hooks-2-deploy   1/1       Running   0          2m
$ oc get pod
NAME             READY     STATUS    RESTARTS   AGE
hooks-1-wl8j2    1/1       Running   0          3m
hooks-2-5cdy7    1/1       Running   0          2m
hooks-2-deploy   0/1       Error     0          2m

3. Check logs
$ oc logs pod/hooks-2-deploy
I0302 21:25:57.711799       1 deployer.go:201] Deploying from xxia-proj/hooks-1 to xxia-proj/hooks-2 (replicas: 2)
I0302 21:25:58.788683       1 rolling.go:228] RollingUpdater: Continuing update with existing controller hooks-2.
I0302 21:25:58.810908       1 rolling.go:228] RollingUpdater: Scaling up hooks-2 from 0 to 2, scaling down hooks-1 from 1 to 0 (keep 2 pods available, don't exceed 2 pods)
I0302 21:25:58.810933       1 rolling.go:228] RollingUpdater: Scaling hooks-2 up to 1
F0302 21:28:02.181326       1 deployer.go:69] timed out waiting for any update progress to be made

4.
$ oc get pod hooks-2-deploy -o yaml > hooks-2-deploy_Error.yaml

Actual results:
2. hooks-2-deploy changes from Running to Error after about 2 minutes.
3. Error message: timed out waiting ...
Note: 21:28:02 - 21:25:57 is about 2 minutes.

Expected results:
2 and 3: hooks-2-deploy should eventually succeed, as it does on an Origin instance.

Additional info:
See the attached YAML files. Not sure whether the openshift3/ose-deployer:v3.1.1.908 image has a problem.


Comment 3
Paul Weil, 2016-03-03 13:55:47 UTC

Usually a deployment failing to make progress can be attributed to the new deployment's pods failing in some way.  For instance, this can happen if you have an invalid probe so the pod never becomes ready, and your DC is configured to ensure you do not have down time.  Can you take a look at the pods the second deployment is creating and post any logs from them as well?
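For example, a quick way to inspect what the second deployment's pods are doing (a generic sketch; hooks-2-5cdy7 is the pod name from the listing above):

$ oc get pods                     # list the pods created by the second deployment
$ oc describe pod hooks-2-5cdy7   # the Events section shows probe failures or scheduling problems
$ oc logs hooks-2-5cdy7           # container logs from the new pod
$ oc get events                   # recent events in the project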


Comment 4
Xingxing Xia, 2016-03-04 04:41:20 UTC

(In reply to Paul Weil from comment #3)
> configured to ensure you do not have down time.  Can you take a look at the
> pods the second deployment is creating and post any logs from them as well?

The 2nd deployment's "replicas" is 2. From the above result, one of the replicas runs well (i.e. hooks-2-5cdy7    1/1       Running   0          2m), but the other replica never comes up.

More interesting:
Downloading updatev1.json, we find spec.strategy.rollingParams.timeoutSeconds is 120 (i.e. 2 min). Changing it to a longer value, 300, and testing again, we get:
$ wget https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/deployment/deployment1.json   https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/deployment/updatev1.json

$ # Change updatev1.json timeoutSeconds value to 300

$ diff deployment1.json updatev1.json
13c13
<             "type": "Recreate",
---
>             "type": "Rolling",
45a46,50
>             "rollingParams": {
>                 "updatePeriodSeconds": 1,
>                 "intervalSeconds": 1,
>                 "timeoutSeconds": 300
>             },
50c55,65
<                 "type": "ConfigChange"
---
>                 "type": "ImageChange",
>                 "imageChangeParams": {
>                     "automatic": true,
>                     "containerNames": [
>                         "mysql-container"
>                     ],
>                     "from": {
>                         "kind": "ImageStreamTag",
>                         "name": "mysql-55-centos7:latest"
>                     }
>                 }
53c68
<         "replicas": 1,
---
>         "replicas": 2,
82c97
<                                 "value": "Plqe5Wev"
---
>                                 "value": "Plqe5Wevchange"
103c118,120
<     "status": {}
---
>     "status": {
>         "latestVersion": 2
>    }

$ oc create -f deployment1.json # Then wait for deployment to complete

$ oc get pod
NAME            READY     STATUS    RESTARTS   AGE
hooks-1-b87q6   1/1       Running   0          57s

$ oc replace -f updatev1.json 
deploymentconfig "hooks" replaced

$ oc get pod # *Note*: One replica of the 2nd deployment is Running within a short time (30s)
NAME             READY     STATUS    RESTARTS   AGE
hooks-1-b87q6    1/1       Running   0          1m
hooks-2-deploy   1/1       Running   0          41s
hooks-2-u7duf    1/1       Running   0          30s

# Continue checking pods several times. *Note*: For the 2nd deployment, one replica comes up quickly but the other never comes up
NAME             READY     STATUS    RESTARTS   AGE
hooks-1-b87q6    1/1       Running   0          5m
hooks-2-deploy   1/1       Running   0          4m
hooks-2-u7duf    1/1       Running   0          4m

# Wait about 300 sec. *Note*: hooks-2-u7duf disappears, but hooks-1-5ex89 appears!
$ oc get pod
NAME             READY     STATUS    RESTARTS   AGE
hooks-1-5ex89    1/1       Running   0          23s   <-- newly created pod
hooks-1-b87q6    1/1       Running   0          7m
hooks-2-deploy   0/1       Error     0          6m

$ oc logs pod/hooks-2-deploy
I0303 22:04:48.424121       1 deployer.go:201] Deploying from xxia-proj/hooks-1 to xxia-proj/hooks-2 (replicas: 2)
I0303 22:04:49.603613       1 rolling.go:228] RollingUpdater: Continuing update with existing controller hooks-2.
I0303 22:04:49.640004       1 rolling.go:228] RollingUpdater: Scaling up hooks-2 from 0 to 2, scaling down hooks-1 from 1 to 0 (keep 2 pods available, don't exceed 2 pods)
I0303 22:04:49.640024       1 rolling.go:228] RollingUpdater: Scaling hooks-2 up to 1
F0303 22:09:53.752335       1 deployer.go:69] timed out waiting for any update progress to be made
*Note*: The bug is not avoided by making timeoutSeconds longer

$ oc get pod hooks-1-5ex89 hooks-1-b87q6 -o yaml | grep MYSQL_PASSWORD -A 1
      - name: MYSQL_PASSWORD
        value: Plqe5Wev
--
      - name: MYSQL_PASSWORD
        value: Plqe5Wev
*Note*: Unexpectedly, pod hooks-1-5ex89, created during the 2nd deployment, uses the pod template of the 1st deployment! (See the above diff)

I made several "*Note*" remarks that deserve attention. In sum: the bug is reproduced, and a new issue is found (the unexpectedly created pod).


Comment 5
Xingxing Xia, 2016-03-04 04:45:22 UTC

You can reproduce it and get the logs you need.


Comment 6
Michail Kargakis, 2016-03-04 11:29:18 UTC

"RollingUpdater: Scaling up hooks-2 from 0 to 2, scaling down hooks-1 from 1 to 0 (keep 2 pods available, don't exceed 2 pods)"

This is the reason why your deployment never proceeds:
First it scales up the new rc: 0/2 -> 1/2.
The old rc is 1/1. Total pods are 2. The deployment process cannot scale up or down anymore. Blocked.

Actually, why did you decide to change from a Recreate strategy to a Rolling one and also bump the replica count at the same time? Is there any specific reason?

maxSurge=1 and maxUnavailable=1 are set in the second deployment (you haven't specified them, so they are set by default). Your new desired size is 2, so this should allow the update to keep at least 1 pod available while never exceeding 3 pods.
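As a back-of-envelope check of those bounds (not the deployer code itself, just the assumption that the minimum availability is desired - maxUnavailable and the ceiling is desired + maxSurge):

$ desired=2; maxUnavailable=1; maxSurge=1
$ echo "keep $((desired - maxUnavailable)) pod(s) available, don't exceed $((desired + maxSurge)) pods"
keep 1 pod(s) available, don't exceed 3 pods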

Looking into it...


Comment 9
Xingxing Xia, 2016-03-07 05:33:54 UTC

Yeah, "keep 2 pods available, don't exceed 2 pods" is the reason.
But testing against an Origin instance (openshift/oc v1.1.3-483-g28cba69), the result is "keep 1 pods available", so everything works well:
$ oc logs pod/hooks-2-deploy
I0307 05:17:28.149138       1 deployer.go:201] Deploying from xxia-proj/hooks-1 to xxia-proj/hooks-2 (replicas: 2)
I0307 05:17:29.205347       1 rolling.go:228] RollingUpdater: Continuing update with existing controller hooks-2.
I0307 05:17:29.219796       1 rolling.go:228] RollingUpdater: Scaling up hooks-2 from 0 to 2, scaling down hooks-1 from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
I0307 05:17:29.219811       1 rolling.go:228] RollingUpdater: Scaling hooks-2 up to 1
I0307 05:17:33.263548       1 rolling.go:228] RollingUpdater: Scaling hooks-1 down to 0
I0307 05:17:35.296335       1 rolling.go:228] RollingUpdater: Scaling hooks-2 up to 2

In both tests against Origin and OSE, `oc get dc hooks -o yaml > hooks-2.yaml` gets:
      maxSurge: 25%
      maxUnavailable: 25%
It seems Origin treats (replicas) 2 * 25% as 1, while OSE treats 2 * 25% as 0?
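For reference, the two roundings in question (a quick arithmetic check, not the deployer code; whether the updater rounds this value down or up is exactly the difference being asked about):

$ awk 'BEGIN { x = 2 * 0.25; printf "25%% of 2 = %.2f -> rounded down = %d, rounded up = %d\n", x, int(x), int(x) + (x > int(x)) }'
25% of 2 = 0.50 -> rounded down = 0, rounded up = 1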

BTW: there is another issue from comment 4, "hooks-2-u7duf disappears! But hooks-1-5ex89 appears!"; isn't that also a problem?


Comment 12
Xingxing Xia, 2016-03-08 04:49:02 UTC

The code is not merged in OSE yet. Will verify using OSE when merged.

Michail, in fact, the bug was not reproduced in Origin. It was only reproduced in OSE/AEP. Did you notice this?

Even using the latest Origin version (openshift/oc v1.1.3-500-g115660e, kubernetes v1.2.0-alpha.7-703-gbc4550d), which includes the fix (see PS) https://github.com/openshift/origin/pull/7828, it seems the issue still exists. See below:
(This time I modified replicas to 4 in updatev1.json for an easier check.)
$ oc create -f deployment1.json # and wait for deployment to complete
$ oc replace -f updatev1.json   # replicas is modified to 4
$ oc logs pod/hooks-2-deploy
I0308 03:14:37.391134       1 deployer.go:201] Deploying from xxia-proj/hooks-1 to xxia-proj/hooks-2 (replicas: 4)
I0308 03:14:38.420383       1 rolling.go:228] RollingUpdater: Continuing update with existing controller hooks-2.
I0308 03:14:38.432619       1 rolling.go:228] RollingUpdater: Scaling up hooks-2 from 0 to 4, scaling down hooks-1 from 1 to 0 (keep 3 pods available, don't exceed 2 pods)
I0308 03:14:38.432635       1 rolling.go:228] RollingUpdater: Scaling hooks-2 up to 1
F0308 03:16:43.139926       1 deployer.go:69] timed out waiting for any update progress to be made

When testing with the Origin version before the fix (v1.1.3-483-g28cba69, see PS), it also gives "keep 3 pods available, don't exceed 2 pods".
Does https://github.com/openshift/origin/pull/7828 not take effect?

PS:
v1.1.3-500-g115660e includes #7828.
$ git log --pretty="%h %cd %cn %s" --date=local 115660e | grep '#7828'
115660e Tue Mar 8 00:53:28 2016 OpenShift Bot Merge pull request #7828 from kargakis/another-rolling-updater-fix

v1.1.3-483-g28cba69 does not include #7828
$ git log --pretty="%h %cd %cn %s" --date=local 28cba69 | grep '#7828'
$


Comment 13
Michail Kargakis, 2016-03-08 09:46:39 UTC

Did you rebuild the deployer image? It doesn't seem that you are running with my code changes...

"keep 3 pods available, don't exceed 2 pods"

You should see "keep 3 pods available, don't exceed 5 pods"
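A rough sanity check of those numbers, assuming maxUnavailable is rounded down and maxSurge is rounded up against the desired replica count of 4:

$ # 25% of 4 replicas: maxUnavailable = floor(1.0) = 1, maxSurge = ceil(1.0) = 1
$ echo "keep $((4 - 1)) pods available, don't exceed $((4 + 1)) pods"
keep 3 pods available, don't exceed 5 pods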


Comment 14
Xingxing Xia, 2016-03-08 10:23:20 UTC

Oh, I was not aware that a rebuilt deployer image was needed.
Now verified in Origin with:
openshift/origin-deployer                         latest              5440034cfef3        15 hours ago        522.5 MB

It works well. Thank you!
Will verify OSE when #7828 is merged


Comment 15
Troy Dawson, 2016-03-09 20:39:05 UTC

Should be in v3.2.0.1 built today.


Comment 16
Xingxing Xia, 2016-03-10 07:14:33 UTC

Verified with openshift/oc v3.2.0.1, kubernetes v1.2.0-alpha.7-703-gbc4550d, and image: openshift3/ose-deployer   v3.2.0.1  e5070cbcc689   12 hours ago

The bug is fixed; the 2nd deployment can complete successfully.


Comment 18
errata-xmlrpc, 2016-05-12 16:31:14 UTC

Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2016:1064

This is the command that was run along with the output below.
For clarity, the new rc is:
django-hash-3059e3853a8b8e954cdab63d6b364713f1d8a652

The old rc is:
django-hash-daf9e4eeac8ea0b32ed53d5c991474b2c1140dba

Found an existing replication controller for app: django
Hashes don't match. Existing: {u'image_hash': u'hash_daf9e4eeac8ea0b32ed53d5c991474b2c1140dba'}, New: {'image_hash': 'hash_3059e3853a8b8e954cdab63d6b364713f1d8a652'}
Performing rolling update
Traceback (most recent call last):
  File "/var/lib/jenkins/jobs/collectr_deploy/workspace/production/kubernetes/backend_stack/deploy_scripts/update_kubernetes_replication_controllers.py", line 70, in <module>
    "--namespace=%s" % args.namespace])
  File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['kubectl', 'rolling-update', u'django-hash-daf9e4eeac8ea0b32ed53d5c991474b2c1140dba', '-f', '/home/ubuntu/kubernetes_3210706b36bf6dbdbacd7fb308ed635231df1da6/django/django-rc.yaml', '--namespace=sandbox']' returned non-zero exit status 1
stdout: Created django-hash-3059e3853a8b8e954cdab63d6b364713f1d8a652
Scaling up django-hash-3059e3853a8b8e954cdab63d6b364713f1d8a652 from 0 to 3, scaling down django-hash-daf9e4eeac8ea0b32ed53d5c991474b2c1140dba from 3 to 0 (keep 3 pods available, don't exceed 4 pods)
Scaling django-hash-3059e3853a8b8e954cdab63d6b364713f1d8a652 up to 1
Scaling django-hash-daf9e4eeac8ea0b32ed53d5c991474b2c1140dba down to 2
Scaling django-hash-3059e3853a8b8e954cdab63d6b364713f1d8a652 up to 2
stderr: error: timed out waiting for any update progress to be made

Afterwards, it leaves behind 2 replication controllers:

> kubectl get rc --namespace=sandbox
CONTROLLER                                                   CONTAINER(S)   IMAGE(S)                                                                                               SELECTOR                                                                    REPLICAS   AGE
django-hash-3059e3853a8b8e954cdab63d6b364713f1d8a652         django         snipped:5000/django:hash_3059e3853a8b8e954cdab63d6b364713f1d8a652          app=django,image_hash=hash_3059e3853a8b8e954cdab63d6b364713f1d8a652         2          11h
django-hash-daf9e4eeac8ea0b32ed53d5c991474b2c1140dba         django         snipped:5000/django:hash_daf9e4eeac8ea0b32ed53d5c991474b2c1140dba          app=django,image_hash=hash_daf9e4eeac8ea0b32ed53d5c991474b2c1140dba         2          15h

Could this be related to #8676?
Looking at kubectl get events --namespace=sandbox about 10 hours after it occurred didn't show me anything. We didn't get a chance to look at it earlier because we only discovered it in the morning.
I resolved the issue by deleting the old replication controller manually and rescaling the new replication controller manually to 3.
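For reference, a sketch of that manual cleanup, using the rc names and namespace from the output above and the desired replica count of 3:

$ kubectl delete rc django-hash-daf9e4eeac8ea0b32ed53d5c991474b2c1140dba --namespace=sandbox
$ kubectl scale rc django-hash-3059e3853a8b8e954cdab63d6b364713f1d8a652 --replicas=3 --namespace=sandbox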

This is my kubectl version:

kubectl version
Client Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-ta-1", GitCommit:"bd56609691112cf026f77d017e89b771635bfdd6", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-ta-1", GitCommit:"bd56609691112cf026f77d017e89b771635bfdd6", GitTreeState:"clean"}

The binaries are built off of this repo:
https://github.com/thesamet/kubernetes/tree/v1.1.0-ta-1

Issue created Feb 05, 2019 by Johannes Hugo Kitschke (@johanneshk)

"Error: UPGRADE FAILED: timed out waiting for the condition" when trying to update

Summary

I’m trying to update my gitlab from chart version 1.3.3 to 1.5.1.

Steps to reproduce

Run 'helm upgrade gitlab gitlab/gitlab --version 1.5.1 -f gitlab.yaml'

Configuration used

certmanager-issuer:                                                                            
  email: XXX
global:
  email:
    display_name: XXXX
    from: XXX
    reply_to: XXX
    subject_suffix: XXX
  hosts:
    domain: XXX
    externalIP: XXX
  psql:
    host: XXX
    password:
      key: password
      secret: postgres-password
    username: XXX
  redis:
    host: XXX
    password:
      key: key
      secret: redis-access-key
  smtp:
    address: XXX
    authentication: login
    enabled: true
    openssl_verify_mode: none
    password:
      key: password
      secret: email-password
    port: 587
    starttls_auto: true
    user_name: XXX
postgresql:
  install: false
redis:
  enabled: false

Current behavior

Update fails with 'Error: UPGRADE FAILED: timed out waiting for the condition'. Also, after several attempts, I see a lot of pending 'shared secrets' pods (I tried updating 4 days ago and today...). I also specified a longer timeout with '--timeout 1200', but that didn't help.

NAME                                                   READY   STATUS      RESTARTS   AGE
cm-acme-http-solver-cv89h                              1/1     Running     0          3h13m
cm-acme-http-solver-fpn9z                              1/1     Running     0          3h13m
gitlab-certmanager-664f847794-lhztl                    1/1     Running     6          62d
gitlab-gitaly-0                                        1/1     Running     1          54d
gitlab-gitlab-runner-5d4bf868d-lfw75                   1/1     Running     7          62d
gitlab-gitlab-shell-5f66f76d6f-5cn4d                   1/1     Running     1          54d
gitlab-gitlab-shell-5f66f76d6f-bn6wk                   1/1     Running     1          54d
gitlab-migrations.3-r29f9                              0/1     Completed   0          54d
gitlab-minio-8dc7f5964-949cf                           1/1     Running     1          62d
gitlab-minio-create-buckets.3-658d8                    0/1     Completed   0          54d
gitlab-nginx-ingress-controller-688b48c456-g9hp9       1/1     Running     2          62d
gitlab-nginx-ingress-controller-688b48c456-gdzln       1/1     Running     2          62d
gitlab-nginx-ingress-controller-688b48c456-zd8nt       1/1     Running     2          62d
gitlab-nginx-ingress-default-backend-cb9857f68-4mw6n   1/1     Running     1          62d
gitlab-nginx-ingress-default-backend-cb9857f68-wr47c   1/1     Running     1          62d
gitlab-prometheus-server-8cf4fdd8-78s98                2/2     Running     2          62d
gitlab-registry-5779d776d6-28zd5                       1/1     Running     1          62d
gitlab-registry-5779d776d6-lccgw                       1/1     Running     1          62d
gitlab-shared-secrets.10-xlp-rt9jc                     0/1     Pending     0          4d2h
gitlab-shared-secrets.11-yeh-drmzf                     0/1     Pending     0          4h9m
gitlab-shared-secrets.4-p37-854zc                      0/1     Pending     0          4d4h
gitlab-shared-secrets.5-trl-9n8xd                      0/1     Pending     0          4d4h
gitlab-shared-secrets.6-3ld-mkgjv                      0/1     Pending     0          4d4h
gitlab-shared-secrets.7-uep-ft84v                      0/1     Pending     0          4d3h
gitlab-shared-secrets.8-v6a-wp8ts                      0/1     Pending     0          4d3h
gitlab-shared-secrets.9-qk7-cdh9t                      0/1     Pending     0          4d3h
gitlab-sidekiq-all-in-1-6898654bff-5mdp2               0/1     Pending     0          54d
gitlab-sidekiq-all-in-1-7895845f74-6j47n               1/1     Running     1          62d
gitlab-task-runner-cfff49db8-g5z26                     1/1     Running     1          54d
gitlab-unicorn-58999cbf64-f99st                        0/2     Pending     0          54d
gitlab-unicorn-fd847cc9-kdhqq                          2/2     Running     2          62d
gitlab-unicorn-fd847cc9-mmq6w                          0/2     Pending     0          62d

Expected behavior

Update works.

Versions

  • Chart: 1.3.3

  • Platform:

    • Cloud: AKS
  • Kubernetes: (kubectl version)

    • Client: Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:57:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
    • Server: Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.11", GitCommit:"1bfeeb6f212135a22dc787b73e1980e5bccef13d", GitTreeState:"clean", BuildDate:"2018-09-28T21:35:22Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
  • Helm: (helm version)

    • Client: Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    • Server: Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

Relevant logs

Which logs would help debugging?
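A few generic commands that usually help narrow down this kind of stuck upgrade (a sketch, not chart-specific guidance; gitlab is the release name and the pod names are taken from the listing above; add -n <namespace> if the release is not in the default namespace):

$ helm status gitlab                                        # release status and its resources
$ helm history gitlab                                       # past revisions, including the failed upgrade
$ kubectl describe pod gitlab-shared-secrets.11-yeh-drmzf   # the Events section explains why the job pod stays Pending
$ kubectl describe pod gitlab-unicorn-58999cbf64-f99st      # same for the stuck unicorn replica
$ kubectl get events                                        # recent cluster events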

Thank you!

  • Forum staff
  • #1

Operating system

Linux

Error text

ERROR! Timed out waiting for AppInfo update.

List of meta modules
List of SM plugins
List of SM extensions

Hi everyone. After the update in early October, my server started spewing the error "ERROR! Timed out waiting for AppInfo update" during updates and validation.
The only thing that helps is repeatedly deleting appinfo.vdf and packageinfo.vdf in steamcmd/appcache/.
I have already completely reinstalled the server (including LGSM itself, also set up from scratch); that does not solve the problem.
I don't know what causes it, but every new update forces me to rack my brain over it again.
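A sketch of that workaround (the steamcmd path and the gameserver script name are placeholders that depend on the install; force-update is the LinuxGSM command mentioned later in the thread):

$ rm ~/steamcmd/appcache/appinfo.vdf ~/steamcmd/appcache/packageinfo.vdf   # clear SteamCMD's cached app metadata
$ ./gameserver force-update                                                # then force an update through LinuxGSM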

  • #2

Off-topic

> appinfo.vdf and packageinfo.vdf in steamcmd/appcache/

Do you have only one server on that box?

  • Forum staff
  • #3

> Do you have only one server on that box?

Yep.

  • #5

Confirmed. I also use LGSM for several servers, and this is already the third update after which I have to delete those files again and run force-update.
It's strange that there is no mention of this bug on GitHub; we could report it and others can confirm.

  • #6

People on the internet write that this nonsense started back with Debian 9.
The same dance happens on Debian 10; I ran into it myself and found no solution.
It was precisely because of this error that I went back from Debian 10 to Debian 8. On Debian 8 everything has been fine to this day and the error has never appeared.

  • Forum staff
  • #7

> People on the internet write that this nonsense started back with Debian 9.
> The same dance happens on Debian 10; I ran into it myself and found no solution.
> It was precisely because of this error that I went back from Debian 10 to Debian 8. On Debian 8 everything has been fine to this day and the error has never appeared.

Well, hardly Debian 9, but Debian 10, maybe.

  • Forum staff
  • #8

I solved the problem by reinstalling the OS (onto the very same Debian 10, at that).

I had tried copying the files over from a working server; that did not help.

  • #9

> I solved the problem by reinstalling the OS (onto the very same Debian 10, at that).
> I had tried copying the files over from a working server; that did not help.

Strange, you yourself wrote in the first post: "I have already completely reinstalled the server (including LGSM itself, also set up from scratch); that does not solve the problem."
That's my point: it's strange that reinstalling once did not help, yet now it suddenly did.
I reinstalled too and it did not help me. Moreover, I found the error is intermittent: sometimes it appears, sometimes it does not...
Report back in a few days whether everything is still working for you.

  • Forum staff
  • #10

> Strange, you yourself wrote in the first post: "I have already completely reinstalled the server (including LGSM itself, also set up from scratch); that does not solve the problem."
> That's my point: it's strange that reinstalling once did not help, yet now it suddenly did.
> I reinstalled too and it did not help me. Moreover, I found the error is intermittent: sometimes it appears, sometimes it does not...
> Report back in a few days whether everything is still working for you.

Mind the difference between reinstalling the game server and reinstalling the OS.

  • #11

> Mind the difference between reinstalling the game server and reinstalling the OS.

Right, got it. For some reason I assumed you were writing about reinstalling the OS in the first post.
In any case, test it and report back in a few days.
Maybe then I'll come back to Debian 10 =)

  • #12

> I solved the problem by reinstalling the OS (onto the very same Debian 10, at that).
> I had tried copying the files over from a working server; that did not help.

Since you reinstalled the OS, has the problem come back at all?

  • Forum staff
  • #13

> Since you reinstalled the OS, has the problem come back at all?

It has not.
