HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers. Retrying.


I have searched a lot, but no solution has helped me with this problem; I get this error constantly.

HTTP request sent, awaiting response... Read error (Connection reset
by peer) in headers. Retrying.

I tried the following commands:

wget url
wget -O url
wget -O url username="user" password="pass" host="host" (something like this)

I am just trying to download the HTML of a page on a secure website, but it shows this error every time. I then tried downloading other web pages, but that did not work either. Is this a server configuration problem?

asked Mar 28, 2018 at 19:53


This error can occur if you access a website via HTTP but it is trying to redirect you to HTTPS.

So if your command was

wget http://url

Try changing it to

wget https://url
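One way to confirm that a redirect is the cause is to request only the response headers and look for a Location header. This is a sketch using curl; example.com stands in for the real host:

```shell
# http_head prints the HTTP status line and any Location header for a URL,
# without downloading the body. A 301/302 pointing at https://... means the
# server is redirecting plain HTTP to HTTPS.
http_head() {
    curl -sI "$1" | grep -iE '^(HTTP/|Location:)'
}

# Example (hypothetical host):
#   http_head http://example.com/
```

If the output shows a redirect to an https:// URL, switching the scheme in the wget command is the fix.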

answered Mar 11, 2021 at 5:43

congusbongus

I encountered a similar issue today. Our IT team suggested using "https" instead of "http" in the URL, together with wget --no-check-certificate, and that worked for me.

Websites may stop serving unencrypted HTTP at some point, which can lead to this issue.

answered Jun 14, 2022 at 16:24

hukai916

The following command works for me.

wget -O test.html http://url --auth-no-challenge --force-directories

answered Mar 29, 2018 at 8:36

GL Yusuf

Try with sudo privileges; it worked for me.

sudo wget url


answered Jun 9, 2022 at 7:15

MELVIN KURIAKOSE


Contents

  1. Read error (Connection reset by peer) in headers #1129
  2. Comments
  3. Subject of the issue
  4. Your environment
  5. Steps to reproduce
  6. Expected behaviour
  7. Actual behaviour
  8. Docker not publishing ports correctly #27491
  9. Comments
  10. Strange server freezes
  11. Ubuntu not letting me download using anything but apt-get
  12. 6 Answers
  13. wget errors


Docker not publishing ports correctly #27491

Since the upgrade to 1.12, if I publish ports on Arch, Fedora, or CentOS, I am unable to connect to them. Could this be because Docker has stopped setting up masquerading correctly? I'm getting 'Connection reset by peer' (see below).

The iptables integration in Docker is enabled, and no special networks are being used, just the default one. However, whenever I try to connect to the published ports, nothing.

I’ve searched and so trust I’m not creating a duplicate issue. Happy to provide any and all info needed and run any diagnostics.

]$ wget localhost:10080
--2016-10-18 13:22:17--  http://localhost:10080/
Resolving localhost (localhost)... 127.0.0.1, ::1
Connecting to localhost (localhost)|127.0.0.1|:10080... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.

--2016-10-18 13:22:18--  (try: 2)  http://localhost:10080/
Connecting to localhost (localhost)|127.0.0.1|:10080... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.

The text was updated successfully, but these errors were encountered:

Please can you give a full example, and full details of your Docker version and info as requested.

Sure thing, if I were to execute an instance of Ghost:
docker run --name some-ghost -p 8080:2368 -d ghost

Then I am unable to reach the container on port 8080. This is true for a web browser or a wget. It is also true if I spin up a new Docker bridge network such as 'docker-network' and then use --net docker-network as a startup argument.

The same is true for essentially any service I try to run (the ones I run on my other servers include Nextcloud and Gogs, on varying ports). Sometimes (though not always!) if I go directly to the container's IP, e.g. 172.18.0.3:8080, I will be served the page. However, trying 127.0.0.1:8080 fails with the connection reset error.

I’ve tried running with iptables and firewalld fully disabled to remove the idea of the firewall being responsible, but the same issue happens on my personal laptop which isn’t running a firewall daemon. Docker versions are:

Laptop: Docker version 1.12.1, build 23cf638
CentOS Server: Docker version 1.12.2, build bb80604
Fedora Server: Docker version 1.10.3, build 8b7fa4a/1.10.3

Further investigation has found that this is being caused by an ongoing issue between firewalld and Docker; they do not play nicely together. There are several other threads concerning this issue, so this one can be closed.

To those who come across this: make sure that iptables support is not disabled in the Docker daemon (it is on by default, so you would have had to disable it manually). The iptables rules Docker creates let containers reach the outside world and DNS, and let connections come back in. A temporary workaround, which is also sensible from a security perspective, is to bind containers to specific addresses and let Docker do its automated iptables magic. For example:
docker run --name nginx -p 172.22.14.12:10080:80 nginx


Strange server freezes

I have a rented server running Ubuntu 12.04 with ISPManager.

Backups start at 02:28. After 20-25 minutes the server freezes. It still answers pings, and HTTP connections are established.

BIND still serves DNS records.

Here are the graphs from Munin.

The odd thing is that when I start the backups by hand, everything works fine. And yesterday the scheduled backups completed normally. It is all very strange. There are no entries in syslog.

Please suggest where to dig; I am out of ideas and need help.

Is it running out of memory and going into swap?
How are you making the backups?
If you connect to the server BEFORE the freeze, what happens to the ssh session during the freeze?
Try looking at free -m and vmstat when it freezes, or right before it does. (Munin is nice, but as you can see it stops drawing graphs, so the first guess is heavy swapping.)

1. I cannot tell, because once it has frozen I cannot get any information out of the server. Here is the memory graph: http://i036.radikal.ru/1303/68/54c0d41a8f72.png

2. The backups are made with the standard ISPManager tool; /usr/local/ispmgr/sbin/pbackup backup 1 is what sits in cron.

3. It happened once that the ssh session stayed open, i.e. spaces, Enter, ^C and ^Z could be typed, but there was no reaction at all.

4. Unfortunately, once it freezes I will hardly be able to connect, unless I connect in advance and wait. I will try tomorrow.

What other graphs do you need?

Try running the backup by hand about three times; maybe it will freeze.
Write a script that, every 5 seconds or so after the backup starts, dumps the output of top, vmstat, free -m and iostat somewhere. Then something may become visible. Munin will not give you that kind of resolution.
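A monitoring script of the kind suggested above might be sketched like this (the log path, interval, and tool list are assumptions):

```shell
# sample prints one snapshot of memory, VM, disk and process activity.
# Tools that are not installed are skipped silently.
sample() {
    date
    free -m     2>/dev/null || true
    vmstat      2>/dev/null || true
    iostat      2>/dev/null || true
    top -b -n 1 2>/dev/null | head -n 15
}

# Intended use: start this just before the backup job kicks off, so the
# log holds data from the minutes right before the freeze:
#   while :; do sample >> /var/log/freeze-monitor.log; sleep 5; done
```

Reading the tail of that log after a reboot shows what the box was doing right before it stopped responding.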

My bet is on all the memory being eaten and the system sinking deep into swap.

I wrote that in the first comment.

The system can behave this way when iowait is very high.

I recommend monitoring with iotop -o.

I will be testing tonight and will report back with the results.

What about disabling swap?

Consider my comment a +1 🙂

If you disable swap, the OOM killer will step in and start KILLING.
Assuming the problem really is a lack of memory.

I am aware of that, which is why I am asking whether it is worth disabling swap to find out. Suppose a process has eaten all the memory: the OOM killer kills it, everything unfreezes, and the log records what was killed and why.

The question is whether that makes sense.

If you are not afraid that the backup may end up broken, it is definitely worth doing to confirm the diagnosis.

Does the system come out of the freeze by itself, or does it need a reboot? How large is the swap? Try setting up netconsole so that kernel messages go to another server; perhaps the hard disk is "dropping out".

The swap is 18 GB in size.

The server has to be rebooted.

Are there any entries at all in the logs between the moment of the freeze and the reboot?

If the HTTP and DNS servers are working, they are resident in RAM. So even if memory is short and swapping is going on, the kernel is still able to pull the needed processes out of swap. That might happen slowly, so logging in over ssh might be impossible (because of the authorization timeout), but an already-open ssh session would keep working.

I think you do not even need to set up netconsole; simply forward the logs over the network with syslogd.

The logs are completely clean. Immediately after the moment of the freeze comes the start of the kernel boot.

The ssh session stays up, but I suspect that only the connection is being kept alive, not that ssh is actually working.

To me these are the symptoms of the hard disk "dropping out". With no disk, nothing can be written to the logs and nothing can be read, so commands over ssh cannot execute. Either the controller or the disk is faulty and stops working properly under heavy load.



Ubuntu not letting me download using anything but apt-get

I'm simply trying to clone a repository from GitHub, but I think my problem extends beyond git or GitHub. I've tried the following methods:

Since wget wasn’t working, I figured I’d try using git (knowing that my firewall was probably blocking the git protocol). As you can see, it looks like the firewall did block it.

sudo git clone git://github.com/symfony/symfony.git

Since, the git protocol didn’t work I figured I’d try the http method.

So I figured I’d try another method of getting a file from the internet, but to no avail.

sudo curl www.google.com

So then I tried pinging just to make sure that I see the outside world, and it works!

apt-get install works successfully. Does apt-get use a protocol other than HTTP, or ports other than 80?

I should mention that I’m using a VMware installation of Kubuntu 10.04. Has anyone ever encountered a problem like this? What other methods could you think of to narrow down where the problem is coming from?

Note: I had to add a single quote (‘) before each hyperlink in this post since I don’t have enough rep.

@uloBasEI What was returned is below (minus the HTML of google’s homepage)

6 Answers

To find out which protocol apt-get is using, have a look at the file /etc/apt/sources.list. Packages can come from a CD-ROM, an FTP server, an HTTP server, etc.
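For example, here is a quick way to list the transports in use (a sketch; the parsing is approximate, and the file paths are the usual Debian/Ubuntu ones):

```shell
# apt_transports prints the unique URI schemes (http, https, ftp, cdrom, ...)
# found in deb/deb-src lines of the given sources.list-style files.
apt_transports() {
    grep -hE '^deb' "$@" 2>/dev/null |
        tr ' \t' '\n\n' | grep -E '^([a-z]+://|cdrom:)' | cut -d: -f1 | sort -u
}

# Example:
#   apt_transports /etc/apt/sources.list /etc/apt/sources.list.d/*.list
```

If it prints only http, apt-get is using plain HTTP on port 80 like wget and curl, so the transport itself is not the difference.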

Have you tried opening a webpage with telnet to have a look at the entire HTTP response? For instance:
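The actual example appears to have been lost from the original post; a sketch of the idea might look like this (note that nc, netcat, is often better behaved than telnet with piped input):

```shell
# raw_get sends a minimal HTTP/1.1 request to a host and prints the raw
# response, headers included, by piping the request into telnet.
raw_get() {
    host=$1; port=${2:-80}
    printf 'GET / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' "$host" |
        telnet "$host" "$port"
}

# Example:
#   raw_get www.google.com 80
```

Seeing the full status line and headers tells you whether the reset happens before or after the server starts answering.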

The problem partly had to do with proxy settings. Now, I had set the proxy environment variable

export http_proxy="http://IP ADDRESS:PORT"

before with no luck, but after restarting the computer, several other solutions failed because I didn’t add the environment variable again. I was attempting to clone the git repository in /var/local/git. After a long time of not being able to do what I want, I started trying random solutions, and eventually tried cloning the git repository to /home/myusername/git. To my surprise, the repo started downloading! So, what was the difference?

I performed a ls -la on /var/local/git and /home/myusername/git, and the results were

So, it turns out, I guess this is a lesson on permissions. I’m guessing the problem had something to do with the ‘s‘ permission in /var/local/git, but from what I’ve read I cannot tell why that would be the cause of my problems.

Does anyone know if this is a legit answer, or does it sound like something else was at work here?


wget errors

I’m trying to run wget (GNU Wget 1.8.2) on Solaris 8, as follows:
wget http://www.gnu.org
And I get the following output:
--16:41:56--  http://www.gnu.org/
           => `index.html'
Resolving www.gnu.org... done.
Connecting to www.gnu.org[199.232.41.10]:80... connected.
HTTP request sent, awaiting response...
Read error (Connection reset by peer) in headers.
Retrying.

--16:41:57--  http://www.gnu.org/
  (try: 2) => `index.html'
Connecting to www.gnu.org[199.232.41.10]:80... connected.
HTTP request sent, awaiting response...
Read error (Connection reset by peer) in headers.
Retrying.

--16:41:59--  http://www.gnu.org/
  (try: 3) => `index.html'
Connecting to www.gnu.org[199.232.41.10]:80... connected.
HTTP request sent, awaiting response...
Read error (Connection reset by peer) in headers.
Retrying.

--16:46:36--  http://www.gnu.org/
  (try:20) => `index.html'
Connecting to www.gnu.org[199.232.41.10]:80... connected.
HTTP request sent, awaiting response...
Read error (Connection reset by peer) in headers.
Giving up.

BTW, if I run this:
ping www.gnu.org
I get this output:
ICMP Net redirect from gateway vic.xxxxxxxxxx.co.nz (xxx.xx.xx.xx)
to xxxxx.xxxxxxxxxx.co.nz (xxx.xx.xx.xx) for www.gnu.org (199.232.41.10)

(NOTE: I’ve ‘x’d out private info.)

Any ideas why wget isn't working for me?
Is there any more info I can provide to help resolve this?


Subject of the issue

Read error (Connection reset by peer) in headers.

Your environment

  • Bitwarden_rs version: 1.16.3
  • Install method: docker
  • Clients used: web/curl/wget
  • Reverse proxy and version: NA
  • Version of mysql/postgresql: NA
  • Other relevant information:
[root@centbox bitwarden]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@centbox bitwarden]# docker --version
Docker version 18.09.7, build 2d0083d
[root@centbox bitwarden]# docker-compose --version
docker-compose version 1.24.0, build 0aa59064
[root@centbox bitwarden]#

Steps to reproduce

Start/install the application with docker:
docker run -d --name bitwarden -e WEBSOCKET_ENABLED=true -e LOG_FILE=/data/bitwarden.log -v /opt/bitwarden/:/data/ -p 32080:80 -p 3012:3012 bitwardenrs/server:latest

Expected behaviour

To be able to load the interface via web browser, or get a html response via wget/curl.

Actual behaviour

docker starts okay:

[root@centbox bitwarden]# docker logs bitwarden
/--------------------------------------------------------------------
|                       Starting Bitwarden_RS                        |
|                           Version 1.16.3                           |
|--------------------------------------------------------------------|
| This is an *unofficial* Bitwarden implementation, DO NOT use the   |
| official channels to report bugs/features, regardless of client.   |
| Send usage/configuration questions or feature requests to:         |
|   https://bitwardenrs.discourse.group/                             |
| Report suspected bugs/issues in the software itself at:            |
|   https://github.com/dani-garcia/bitwarden_rs/issues/new           |
--------------------------------------------------------------------/

[2020-09-08 19:05:03.446][ws][INFO] Listening for new connections on 0.0.0.0:3012.
[2020-09-08 19:05:03.446][start][INFO] Rocket has launched from http://0.0.0.0:80
[root@centbox bitwarden]#

However, trying to connect to the service via its IP in a browser (http://192.168.2.25:32080/) fails with "This site can't be reached. 192.168.2.25 took too long to respond."

Also accessing via command line directly on the server via wget or curl gets me the connection reset:

[root@centbox bitwarden]# wget http://localhost:32080
--2020-09-08 15:10:48--  http://localhost:32080/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:32080... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying. ^C
[root@centbox bitwarden]#
[root@centbox bitwarden]# curl http://localhost:32080
curl: (56) Recv failure: Connection reset by peer
[root@centbox bitwarden]#

Docker appears okay:

[root@centbox bitwarden]# docker ps -a
CONTAINER ID        IMAGE                                 COMMAND             CREATED             STATUS                   PORTS                                           NAMES
a172efe73114        bitwardenrs/server:latest             "/start.sh"         6 minutes ago       Up 6 minutes (healthy)   0.0.0.0:3012->3012/tcp, 0.0.0.0:32080->80/tcp   bitwarden
[root@centbox bitwarden]#

No other information appears in the docker log (via docker logs bitwarden) or in the /data/bitwarden.log file. It's as if the connection doesn't reach the application in Docker.

The port chosen (32080) isn’t used for anything else.

Anyone got any ideas? Thanks.

EDIT:
If I run curl inside the container, I get the expected response:

[root@centbox bitwarden]# docker exec -it bitwarden curl http://localhost
<!DOCTYPE html>
<html>

<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=1010">
    <meta name="theme-color" content="#175DDC">

    <title page-title>Bitwarden Web Vault</title>

    <link rel="apple-touch-icon" sizes="180x180" href="images/icons/apple-touch-icon.png">
    <link rel="icon" type="image/png" sizes="32x32" href="images/icons/favicon-32x32.png">
    <link rel="icon" type="image/png" sizes="16x16" href="images/icons/favicon-16x16.png">
    <link rel="mask-icon" href="images/icons/safari-pinned-tab.svg" color="#175DDC">
    <link rel="manifest" href="manifest.json">
<link href="app/main.ec1191668ddd60d16e05.css" rel="stylesheet"></head>

<body class="layout_frontend">
    <app-root>
        <div class="mt-5 d-flex justify-content-center">
            <div>
                <img src="images/logo-dark@2x.png" class="mb-4 logo" alt="Bitwarden">
                <p class="text-center">
                    <i class="fa fa-spinner fa-spin fa-2x text-muted" title="Loading" aria-hidden="true"></i>
                </p>
            </div>
        </div>
    </app-root>
<script type="text/javascript" src="app/polyfills.ec1191668ddd60d16e05.js"></script><script type="text/javascript" src="app/vendor.ec1191668ddd60d16e05.js"></script><script type="text/javascript" src="app/main.ec1191668ddd60d16e05.js"></script></body>

</html>
[root@centbox bitwarden]# 

I'm simply trying to clone a repository from GitHub, but I think my problem extends beyond git or GitHub. I've tried the following methods:

sudo wget ‘http://github.com/symfony/symfony/tarball/master

--2010-07-30 07:51:36--  'http://github.com/symfony/symfony/tarball/master
Resolving github.com... 207.97.227.239
Connecting to github.com|207.97.227.239|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.

--2010-07-30 07:51:38--  (try: 2)  'http://github.com/symfony/symfony/tarball/master
Connecting to github.com|207.97.227.239|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.

--2010-07-30 07:51:40--  (try: 3)  'http://github.com/symfony/symfony/tarball/master
Connecting to github.com|207.97.227.239|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.

Since wget wasn’t working, I figured I’d try using git (knowing that my firewall was probably blocking the git protocol). As you can see, it looks like the firewall did block it.

sudo git clone git://github.com/symfony/symfony.git

Initialized empty Git repository in /home/myname/symfony/.git/

github.com[0: 207.97.227.239]: errno=Connection refused
fatal: unable to connect a socket (Connection refused)

Since, the git protocol didn’t work I figured I’d try the http method.

sudo git clone ‘http://github.com/symfony/symfony.git

Initialized empty Git repository in /home/myname/symfony/.git/
error:  while accessing 'http://github.com/symfony/symfony.git/info/refs

fatal: HTTP request failed

So I figured I’d try another method of getting a file from the internet, but to no avail.

sudo curl www.google.com

curl: (56) Failure when receiving data from the peer

So then I tried pinging just to make sure that I see the outside world, and it works!

ping www.google.com

PING www.l.google.com (74.125.91.104) 56(84) bytes of data.
64 bytes from qy-in-f104.1e100.net (74.125.91.104): icmp_seq=1 ttl=49 time=46.4 ms
64 bytes from qy-in-f104.1e100.net (74.125.91.104): icmp_seq=2 ttl=49 time=46.5 ms
64 bytes from qy-in-f104.1e100.net (74.125.91.104): icmp_seq=3 ttl=49 time=46.5 ms

--- www.l.google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 46.477/46.499/46.512/0.176 ms

apt-get install works successfully. Does apt-get use a protocol other than HTTP, or ports other than 80?

I should mention that I’m using a VMware installation of Kubuntu 10.04. Has anyone ever encountered a problem like this? What other methods could you think of to narrow down where the problem is coming from?

Note: I had to add a single quote (‘) before each hyperlink in this post since I don’t have enough rep.

@uloBasEI
What was returned is below (minus the HTML of google’s homepage)

HTTP/1.0 200 OK
Date: Fri, 30 Jul 2010 16:13:45 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Set-Cookie: PREF=ID=217dc6a999f1ffaf:TM=1280506425:LM=1280506425:S=gnL_tcT4FJLh9Cgh; expires=Sun, 29-Jul-2012 16:13:45 GMT; path=/; domain=.google.com
Set-Cookie: NID=37=fIVPDdQeoCyfwgmhtAGDf06le4T450U4v19oMdSBCQQDe67Ys5bHwMaGsnywEjUkGSk0Ex5BRGFDouO5Fsme0uARoU3uTNmeTzKfi4mq-L8jDOtcBTC88cCDg0DSpjBr; expires=Sat, 29-Jan-2011 16:13:45 GMT; path=/; domain=.google.com; HttpOnly
Server: gws
X-XSS-Protection: 1; mode=block

  • #1

Hi,

I'm trying to install PVE 7.2 on Bullseye, but keep hitting the 401 Unauthorized return code. If I hit the URL https://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/Packages in the browser, I get a login prompt. I'm definitely using the non-subscription address:

Code:

[sources.list]

deb https://deb.debian.org/debian bullseye main
deb https://security.debian.org/debian-security bullseye-security main
deb https://deb.debian.org/debian bullseye-updates main

deb https://download.proxmox.com/debian/pve bullseye pve-no-subscription


[/etc/apt/sources.list.d/pve-install-repo.list]
deb [arch=amd64] http://download.proxmox.com/debian/pve bullseye pve-no-subscription

apt update error:

Code:

Err:6 https://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 Packages
  401  Unauthorized [IP: 51.91.38.34 443]
Ign:7 https://download.proxmox.com/debian/pve bullseye/pve-no-subscription all Packages
Ign:8 https://download.proxmox.com/debian/pve bullseye/pve-no-subscription Translation-en_GB
Ign:9 https://download.proxmox.com/debian/pve bullseye/pve-no-subscription Translation-en
Reading package lists... Done
N: Ignoring file 'pve-install-repo.list.old' in directory '/etc/apt/sources.list.d/' as it has an invalid filename extension
W: The repository 'https://download.proxmox.com/debian/pve bullseye Release' does not have a Release file.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch https://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/Packages  401  Unauthorized [IP: 51.91.38.34 443]

There is definitely an authorization prompt on the non-sub endpoint. Is the option to install on debian using apt only available to subscribed accounts now? Is there a way I can install from the CD media using apt to avoid this?

oguz

Proxmox Retired Staff


  • #3

I tried using the http address initially, it just hangs waiting for headers

oguz

Proxmox Retired Staff


  • #4

I tried using the http address initially, it just hangs waiting for headers

can you curl it? maybe there’s something intercepting your traffic?

try curl http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/Packages

  • #5

I was just responding to that. Exactly: if you try manually hitting that endpoint, curl fails:

Code:

root@debian:/etc/apt/apt.conf.d# curl -v http://download.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/Packages
*   Trying 51.91.38.34:80...
* Connected to download.proxmox.com (51.91.38.34) port 80 (#0)
> GET /debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/Packages HTTP/1.1
> Host: download.proxmox.com
> User-Agent: curl/7.74.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer

I’ve tried adding another workaround to the apt conf:

Code:

Acquire::http::Pipeline-Depth "0";

No effect. A browser can't hit that http endpoint either; it just times out.

  • #6

Ah, hang on... <facepalm>

Code:

Connected to download.proxmox.com (51.91.38.34) port 80 (#0)

So the endpoint is responding. It must be something in my OS then; it's a fresh Debian build.

  • #7

It is weird though, a `wget` also fails:

Code:

W: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/InRelease  Connection failed [IP: 51.91.38.34 80]
W: Some index files failed to download. They have been ignored, or old ones used instead.

root@debian:/etc/apt/apt.conf.d# wget http://download.proxmox.com/debian/pve/dists/bullseye/InRelease
--2022-07-27 15:35:11--  http://download.proxmox.com/debian/pve/dists/bullseye/InRelease
Resolving download.proxmox.com (download.proxmox.com)... 51.91.38.34, 2001:41d0:203:7470::34
Connecting to download.proxmox.com (download.proxmox.com)|51.91.38.34|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.

--2022-07-27 15:36:17--  (try: 2)  http://download.proxmox.com/debian/pve/dists/bullseye/InRelease
Connecting to download.proxmox.com (download.proxmox.com)|51.91.38.34|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying

  • #8

The address is a CNAME, so I turned down some security for apt; it still fails:

Code:

root@debian:/etc/apt/apt.conf.d# apt-get update
Hit:1 https://security.debian.org/debian-security bullseye-security InRelease
Hit:2 https://deb.debian.org/debian bullseye InRelease
Hit:3 https://deb.debian.org/debian bullseye-updates InRelease
Err:4 http://download.proxmox.com/debian/pve bullseye InRelease
  Connection failed [IP: 51.91.38.34 80]
Reading package lists... Done
W: Failed to fetch http://download.proxmox.com/debian/pve/dists/bullseye/InRelease  Connection failed [IP: 51.91.38.34 80]
W: Some index files failed to download. They have been ignored, or old ones used instead.

oguz

Proxmox Retired Staff


  • #9

hmm, that’s really weird.

if i try to curl it from an external server (outside of proxmox network) i have no problem getting the packages, so it could be something related to your setup or location.

* where is your server located? e.g. hoster, self-hosted at home, workplace etc. (and which country)

* can you reach other http and https sites normally? e.g. curl https://www.google.com and curl http://www.google.com ?

* can you test: curl http://de.cdn.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/Packages (to try the europe CDN)
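The checks above can be folded into a small probe loop (a sketch; the URLs are the ones mentioned in this thread):

```shell
# probe reports whether a full HTTP exchange with the given URL succeeds.
probe() {
    if curl -fsS -o /dev/null --max-time 10 "$1"; then
        echo "OK   $1"
    else
        echo "FAIL $1"
    fi
}

# Example run over the endpoints discussed in this thread:
#   probe http://www.google.com
#   probe https://www.google.com
#   probe http://download.proxmox.com/debian/pve/dists/bullseye/InRelease
#   probe http://de.cdn.proxmox.com/debian/pve/dists/bullseye/InRelease
```

If only the plain-HTTP Proxmox endpoint fails while everything else passes, something on the path (firewall, router, ISP middlebox) is resetting that specific connection.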

  • #10

Hey @oguz,

Apologies, was away for a few days. So my server is located at home, direct connection, not via proxy. The curl works fine:

root@debian:~# curl http://de.cdn.proxmox.com/debian/pve/dists/bullseye/pve-no-subscription/binary-amd64/Packages | more
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0Package: btrfs-progs-dbgsym
Architecture: amd64
Version: 5.16.2-1~bpo11+1
Auto-Built-Package: debug-symbols
Priority: optional
Section: debug
Source: btrfs-progs
Maintainer: Adam Borowski <kilobyte@angband.pl>

Obv. the google addresses also curl without issue.

I agree, this appears OS-related, but I've been unable to find any settings that would block it.

Update: I've just retried the manual steps here and it works now! In the interim, the only thing that has changed is that I've switched to an Orbi wifi network from my telecom router's default wifi, although I still use that same router as the gateway for outbound traffic. So it appears the issue was related to my wifi settings.

Thanks for the support!

oguz

Proxmox Retired Staff


  • #11


you’re welcome, please mark the thread as [SOLVED] :)

I am using the following code to download a list of pdf files:

wget -i list.txt -A .pdf

Some pdf files are downloaded properly. However, some pdf files are not downloaded properly. When I check the log, I see the following report:

--2013-04-09 11:25:42--  http://amazon.com/111.pdf
Reusing existing connection to amazon.com:80.
HTTP request sent, awaiting response... 200 No headers, assuming HTTP/0.9
Length: unspecified
Saving to: `111.pdf'


    [                                         <=>       ] 1,045       --.-K/s   in 2m 9s


2013-04-09 11:27:51 (8.11 B/s) - Read error at byte 1045 (Connection reset by peer). Retrying.


--2013-04-09 11:27:52--  (try: 2)  http://amazon.com/111.pdf
Connecting to amazon.com (amazon.com)|00.00.55.888|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2680728 (2.6M) [application/pdf]
Saving to: `111.pdf'


61% [==============================>                    ] 1,649,221   10.0K/s   in 2m 41s


2013-04-09 11:30:41 (10.0 KB/s) - Read error at byte 1649221/2680728 (Connection reset by peer). Retrying.


--2013-04-09 11:30:43--  (try: 3)  http://amazon.com/111.pdf
Connecting to amazon.com (amazon.com)|00.00.55.888|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2680728 (2.6M) [application/pdf]
Saving to: `111.pdf'


100%[==================================================>] 2,680,728   10.1K/s   in 4m 22s


2013-04-09 11:35:11 (10.0 KB/s) - `111.pdf' saved [2680728/2680728]

I wonder why I cannot open the PDF file 111.pdf. The report above says it is 100% downloaded. Is it because of the connection reset by the peer?
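One way to check whether a file like 111.pdf actually came down whole is to compare its on-disk size against the Content-Length the server reports. This is only a sketch: the URL and filename are the ones from the log above, and it assumes the server sends a Content-Length header at all (the HTTP/0.9 response in the first attempt suggests it may not).

```shell
#!/bin/sh
# Print the Content-Length value from HTTP response headers read on stdin.
content_length() {
    awk 'tolower($1) == "content-length:" { gsub("\r", "", $2); print $2 }'
}

# Usage (commented out since it needs network access):
# expected=$(curl -sI http://amazon.com/111.pdf | content_length)
# actual=$(wc -c < 111.pdf)
# [ "$expected" = "$actual" ] || echo "111.pdf is truncated or corrupt"
```

If the sizes disagree, the file was cut short despite wget reporting success for the final attempt.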

Is it possible to put wget in a loop for every file, so that it does not exit the loop until the download completes with no error?
I found the following loop; however, it gives an error. The code and the error are shown below:

Code:

while read -r link
do
        wget -A .pdf
        until [ $? = 0 ]
        do
            wget -A .pdf
        done
done < ./list.txt

Error:

Try `wget --help' for more options.
wget: missing URL
Usage: wget [OPTION]... [URL]...

I am using Cygwin on Windows.

Please let me know if you have other suggestions as well.

Thank you for your help.
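For reference, a minimal retry loop of the kind asked about might look like the sketch below. The error in the loop above comes from calling wget without a URL; here the URL is passed through, and -c tells wget to resume a partial file instead of starting over. RETRY_DELAY is a made-up knob for the pause between attempts, defaulting to 5 seconds.

```shell
#!/bin/sh
# Run a command repeatedly until it exits 0, pausing between attempts.
retry() {
    until "$@"; do
        echo "failed, retrying: $*" >&2
        sleep "${RETRY_DELAY:-5}"
    done
}

# Usage (commented out since it needs list.txt and network access):
# while read -r link; do
#     retry wget -c "$link"
# done < list.txt
```

Note that an endless retry loop will never terminate for a URL that is permanently broken; wget's own --tries and --waitretry options are a bounded alternative.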

Working Scenario

I have the following scenario with two subnets

  Subnet 1: 192.168.1.0/24
  Subnet 2: 192.168.2.0/24

and three routers

Router 1: 192.168.1.1
Router 2: 192.168.2.1
Router 3: 192.168.1.10 / 192.168.2.10

Routers 1 and 2 connect each subnet to the internet and are configured as the default gateway on all client machines. Router 3 connects the two subnets. For routing between the subnets to work, I configured two static routes:

  Router 1: Destination 192.168.2.0, Netmask 255.255.255.0, Gateway 192.168.1.10
  Router 2: Destination 192.168.1.0, Netmask 255.255.255.0, Gateway 192.168.2.10
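On a Linux router the equivalent static routes could be added with iproute2. This is only a sketch; Routers 1 and 2 here may well be appliances with their own configuration syntax.

```shell
# On Router 1: reach Subnet 2 via Router 3's interface in Subnet 1
ip route add 192.168.2.0/24 via 192.168.1.10

# On Router 2: reach Subnet 1 via Router 3's interface in Subnet 2
ip route add 192.168.1.0/24 via 192.168.2.10
```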

My goal is for clients in Subnet 1 to be able to access an HTTP/HTTPS server on Subnet 2:

Client 1: 192.168.1.20        (Debian 9)
Server 1: 192.168.2.20        (Windows Server)

From Client 1 I can ping and wget Server 1:

# ping 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=2 ttl=128 time=0.993 ms
64 bytes from 192.168.2.20: icmp_seq=3 ttl=128 time=0.943 ms
64 bytes from 192.168.2.20: icmp_seq=4 ttl=128 time=1.06 ms
64 bytes from 192.168.2.20: icmp_seq=5 ttl=128 time=1.77 ms
^C
--- 192.168.2.20 ping statistics ---
5 packets transmitted, 4 received, 20% packet loss, time 4046ms
rtt min/avg/max/mdev = 0.943/1.193/1.774/0.338 ms

# wget -O- http://192.168.2.20/
--2019-06-26 08:59:12--  http://192.168.2.20/
Connecting to 192.168.2.20:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: /Test [following]
--2019-06-26 08:59:15--  http://192.168.2.20/Test
Reusing existing connection to 192.168.2.20:80.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘STDOUT’

-                                   [<=>                                                   ]       0  --.-KB/s
<!DOCTYPE html>
<html>Hello World!</html>
-                                   [ <=>                                                  ]   5.77K  --.-KB/s    in 0.002s

2019-06-26 08:59:15 (2.79 MB/s) - written to stdout [5912]

So far everything is working fine. Only the first ping sometimes gets lost, which to my understanding should not be an issue.

Problem

Now I had to set up OpenVPN server on Client 1 (tun0, Subnet 10.0.0.0/24). In order for VPN clients to get access not only to Client 1 but also other clients in Subnet 1, I activated IP forwarding in /etc/sysctl.conf

net.ipv4.ip_forward=1
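A change in /etc/sysctl.conf only takes effect after a reload. To apply and verify it immediately (a sketch, assuming root):

```shell
# Reload /etc/sysctl.conf, or set the key directly:
sysctl -p
sysctl -w net.ipv4.ip_forward=1

# Verify; a value of 1 means forwarding is on:
cat /proc/sys/net/ipv4/ip_forward
```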

The moment I activate IP forwarding, the routing setup described above starts misbehaving. From Client 1 I can still ping Server 1, but wget no longer works:

# ping 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
From 192.168.1.1: icmp_seq=2 Redirect Host(New nexthop: 192.168.1.10)
64 bytes from 192.168.2.20: icmp_seq=2 ttl=128 time=0.935 ms
From 192.168.1.1: icmp_seq=3 Redirect Host(New nexthop: 192.168.1.10)
64 bytes from 192.168.2.20: icmp_seq=3 ttl=128 time=0.880 ms
From 192.168.1.1: icmp_seq=4 Redirect Host(New nexthop: 192.168.1.10)
64 bytes from 192.168.2.20: icmp_seq=4 ttl=128 time=0.975 ms
^C
--- 192.168.2.20 ping statistics ---
4 packets transmitted, 3 received, 25% packet loss, time 3041ms
rtt min/avg/max/mdev = 0.880/0.930/0.975/0.038 ms

# wget -O-  http://192.168.2.20
--2019-06-26 14:08:38--  http://192.168.2.20/
Connecting to 192.168.2.20:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.

--2019-06-26 14:09:12--  (try: 2)  http://192.168.2.20/
Connecting to 192.168.2.20:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.

^C

wget still manages to establish a connection to Server 1, but then waits about 30 seconds for a response before showing the above error.

Any ideas on why IP forwarding is breaking the communication between Client 1 and Server 1? Any suggestions on how to properly configure Client 1, so that both routing to Server 1 and access from OpenVPN clients to Subnet 1 will work at the same time?

I would prefer not to configure additional static routes on Client 1 but keep the static routes on the default gateways.
