Rsync sender write error broken pipe 32 – Causes and fix

by Keerthi PS | Jan 1, 2020

Rsync sender write error broken pipe 32 shows up due to insufficient disk space at the remote end or a network connection timeout. Proactive monitoring helps avoid it.

Have you just hit an rsync sender write error broken pipe 32? We can help you fix it.

Usually, this error message indicates a dropping network connection or a full disk space at the remote end. It’s quite easy to avoid it with proper server monitoring.

At Bobcares, we receive requests to fix failed rsync operations, as a part of our Server Management Services.

Today, we will see how our Support Engineers fix this error and see the possible ways to avoid it.

Why does rsync end with write error broken pipe?

Rsync is a popular file synchronization tool for both local and remote file transfer. It is a fast transfer option because, if the destination already has the source file, it copies only the differences.

But sometimes rsync operations end up in errors. Today, let’s have a look at the write error broken pipe.

The main reasons for the broken pipe error are insufficient disk space and an idle connection.

The complete error message appears as,

rsync sender write error broken pipe [32].

It is usually accompanied by an rsync exit code that narrows down the scenario.

For instance, exit code 12 denotes an error in the rsync protocol data stream, and exit code 10 denotes an error in socket I/O, and so on.

How do we fix the rsync write error broken pipe?

Let’s see how our Support Engineers fix the broken pipe error.

1. Insufficient disk space causes broken pipe error

Customers often receive the broken pipe error when the disk is full. In a recent helpdesk request, a full disk was the reason for the rsync error. Our Support Engineers checked the disk space and found that /home on the server was at 100% usage.

Therefore, we followed up with the customer and removed unwanted files. This fixed the rsync error, and the transfer started working again. In some cases, clearing files is not an option; then we ask the customer to increase the server's disk space.

Rest assured, our Dedicated Engineers always check the disk space before transferring large files.

2. Rsync fails due to idle network connection

Many times an idle connection can also cause a broken pipe error in remote file transfer.

In such cases, we use the --timeout option in the rsync command. With a timeout set, rsync sends keep-alive messages to the remote server so the connection is not treated as idle.
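A command-line sketch of that approach (host, user, paths, and the interval values are hypothetical; adjust to your setup):

```shell
# --timeout=60 makes rsync give up if no data moves for 60 seconds and
# emit periodic keep-alive messages; the ssh-level ServerAlive options
# keep the underlying connection from being dropped while idle.
rsync -avz --timeout=60 \
  -e "ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=4" \
  /data/ user@remotehost:/backup/data/
```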

How do we avoid this sender write error?

With proper monitoring and proactive actions, we can avoid rsync errors due to full disk space and idle connection. Let’s see how our Support Engineers do it.

To avoid rsync errors due to full disk space, we take the following measures.

  • Regularly track the rsync progress.
  • Monitor disk space with scripts that send an alert email when usage reaches a specified limit.
  • Split larger migrations into batches so that we can keep a better eye on them.
  • Clean up old backups systematically to manage disk space.
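The monitoring-script point can be sketched as a small cron-able script; the threshold, the mount point, and the use of echo instead of a real mail command are all assumptions for illustration:

```shell
#!/bin/sh
# Alert when a mount point crosses a usage threshold (example values).
THRESHOLD=90
MOUNT=/home

# df -P prints one line per filesystem; column 5 is "Capacity", e.g. "42%".
usage=$(df -P "$MOUNT" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')

if [ "$usage" -ge "$THRESHOLD" ]; then
    # In a real setup, replace echo with mail -s "disk alert" admin@example.com
    echo "ALERT: $MOUNT is at ${usage}% usage"
fi
```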

Similarly, we configure ssh to prevent a timeout. For this, we set a few parameters such as TCPKeepAlive, ServerAliveInterval, and ServerAliveCountMax.
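For reference, the client-side entries in ~/.ssh/config might look like this (the host name and interval values are illustrative; TCPKeepAlive is the current spelling of the older KeepAlive keyword):

```
Host remotehost
    TCPKeepAlive yes
    ServerAliveInterval 30
    ServerAliveCountMax 4
```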

[Need assistance in fixing rsync errors? – We can help you.]

Conclusion

In short, rsync sender write error broken pipe 32 shows up due to insufficient disk space or a network connection timeout. Today, we saw how our Support Engineers fix this error, and the proactive measures we take to avoid it.


I have encountered this with rsync in the past as well. The solution that fixed it for me was running it from within a screen session, which was able to help maintain the connection to the remote server.

screen -LS rsync
[execute your rsync command]
Ctrl-A+D to detach from the session

You can check the status by running screen -x rsync (or whatever you named the session; naming it is not required). This re-attaches your current shell to that session. Just remember to detach again after checking so that it keeps running in the background.

You can also start the command detached in one fell swoop with screen -dm command (or screen -dmS name command to name the session). You may want to man screen before trying that last one.

EDIT:

I am editing my answer because you have confirmed that screen provides no assistance in this scenario. However, in reply to my comment suggesting you try scp, you said that, oddly enough, it worked just fine.

So my new answer is this: use scp — or ssh (with tar) — instead of rsync

Granted, scp doesn’t support the vast feature set of rsync, but you’d be surprised to discover just how many features it does support that are almost identical to those of rsync.

Real world scenarios for scp and other alternatives to rsync:

A while back, I was tasked with creating a shell script that would pull logs from our production servers and store them locally on a web server so that developers could access them for troubleshooting. After trying unsuccessfully to get the Unix team to install rsync on our servers, I came up with a workaround using scp that worked just as well.

That being said, I recently modified the script so that all it uses is ssh and tar (GNU tar/gtar, to be exact). GNU tar supports many of the options that you will actually find in rsync, such as --include, --exclude, permission/attribute preservation, compression, etc.

The way I now accomplish this is by ssh-ing to the remote server (via pubkey auth) and running gtar -czf - [with options such as --include='*.log' and --exclude='*core*', etc.]. This writes everything to stdout, which is then piped [locally] into tar -xzf, so that no changes are made on the remote production server, and all of the files are pulled as-is to the local server. It’s a great alternative to rsync in this case. The only important things neither tar nor scp support are incremental transfers and the block-level error checking that rsync features.

The full command I am referring to when using ssh and tar would be something like this (remote is Solaris 10; local is Debian, for what it’s worth):

cd /var/www/remotelogs
ssh -C user@remotehost "cd /path/to/remote/app.directories; gtar -czf - --include='*.log' --exclude='*.pid' --exclude='*core*' *" | tar -xz

In your scenario it would be the opposite — tar -cf - locally, and pipe to remote server via ssh user@remotehost "tar -xf -" — there is another answer that references this type of behavior but doesn’t go into as much detail.
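A sketch of that push direction; the local directory, remote host, and destination path are hypothetical:

```shell
# Pack locally, stream over ssh, unpack remotely.
# -C enables ssh-level compression; nothing is staged on disk in between.
cd /var/www/locallogs
tar -czf - --exclude='*.pid' . | ssh -C user@remotehost "cd /path/to/dest && tar -xzf -"
```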

There are a few other options that I have included to speed things up. I timed everything relentlessly to get the execution time as low as possible. You would think that using compression with tar would be pointless, but it actually speeds things up a bit, as does using the -C flag with ssh to enable ssh compression as well. I may update this post at a later date to include the exact command that I use (which is very similar to what I posted), but I don’t feel like getting on VPN at the moment since I’m on vacation this week.

On Solaris 10, I also use -c blowfish, because it is the quickest cipher to authenticate with and also helps speed things up a tad, but our Solaris 11 hosts either don’t support it or have this cipher suite disabled.

Additionally, if you choose to go with the ssh/tar option, it would actually be a good idea to implement my original solution of using screen if you are doing a backup that will take awhile. If not, make sure your keepalive/timeout settings in your ssh_config are tweaked just right, or this method will also be very likely to cause a broken pipe.

Even if you go with scp, I always find it a best practice to use screen or tmux when doing an operation of this sort, just in case. Many times I don’t follow my own advice and fail to do this, but it is indeed good practice to use one of these tools to ensure that the remote job doesn’t fail because your active shell session gets disconnected somehow.

I know you want to figure out the root cause of your rsync issue. However, if this is really important, these are two great workarounds that you can experiment with in the meantime.


How can I fix a Broken Pipe error?

I recently reinstalled RVM (following the instructions at http://rvm.io) after a fresh install of Ubuntu 12.10 when I got an SSD Drive.

Now, when I type: type rvm | head -1

I receive the following error:

But if I immediately repeat the command then I only receive:

And it appears everything is OK? What’s happening? What can I do to fix it? It doesn’t always happen; it appears to be sporadic. I’ve tried to find some kind of pattern to it but haven’t yet.

4 Answers 4

Seeing "Broken pipe" in this situation is rare, but normal.

When you run type rvm | head -1, bash executes type rvm in one process and head -1 in another. The stdout of type is connected to the "write" end of a pipe, the stdin of head to the "read" end. Both processes run at the same time.

The head -1 process reads data from stdin (usually in chunks of 8 kB), prints out a single line (according to the -1 option), and exits, causing the "read" end of the pipe to be closed. Since the rvm function is quite long (around 11 kB after being parsed and reconstructed by bash), this means that head exits while type still has a few kB of data to write out.

At this point, since type is trying to write to a pipe whose other end has been closed – a broken pipe – the write() function it called will return an EPIPE error, translated as "Broken pipe". In addition to this error, the kernel also sends the SIGPIPE signal to type, which by default kills the process immediately.

(The signal is very useful in interactive shells, since most users do not want the first process to keep running and trying to write to nowhere. Meanwhile, non-interactive services ignore SIGPIPE – it would not be good for a long-running daemon to die on such a simple error – so they find the error code very useful.)

However, signal delivery is not 100% immediate, and there may be cases where write() returns EPIPE and the process continues to run for a short while before receiving the signal. In this case, type gets enough time to notice the failed write, translate the error code and even print an error message to stderr before being killed by SIGPIPE. (The error message says «-bash: type:» since type is a built-in command of bash itself.)

This seems to be more common on multi-CPU systems, since the type process and the kernel’s signal delivery code can run on different cores, literally at the same time.

It would be possible to remove this message by patching the type builtin (in bash’s source code) to immediately exit when it receives an EPIPE from the write() function.

However, it’s nothing to be concerned about, and it is not related to your rvm installation in any way.
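The same mechanics are easy to reproduce with any long-running writer. Here head closes the pipe after one line, and the writer is killed by SIGPIPE; in bash, a process terminated by signal N reports exit status 128 + N, and SIGPIPE is signal 13:

```shell
# head exits after printing one line; yes dies from SIGPIPE on its next write.
yes | head -n 1                      # prints: y

# Inspect the writer's fate via PIPESTATUS (bash-specific):
yes | head -n 1 > /dev/null
echo "${PIPESTATUS[0]}"              # prints: 141 (128 + 13, killed by SIGPIPE)
```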



Errno 32 Broken pipe in Python – an input/output error

Python is now considered a mature programming language, widely used by data scientists and AI engineers thanks to its simplicity and readable syntax.

In this guide, we discuss [Errno 32] Broken pipe in Python, a familiar error message that often appears when interacting with the file system. We will look at why it occurs, as well as how to avoid and fix it in code.

What causes "[Errno 32] Broken pipe" in Python?

A "broken pipe" is usually reported as an IOError (short for "input/output error") that occurred at the Linux system level. It typically arises when reading and writing files or, in other words, when performing file I/O or network I/O (through sockets).

The equivalent Linux system error is EPIPE, taken from the GNU libc error codes.

Macro: int EPIPE

"Broken pipe." means there is no process reading at the other end of the pipe. Every library function that returns this error code also raises a SIGPIPE signal; this signal terminates the program if it is not handled or blocked. Thus, a program will never see EPIPE unless it has handled or blocked SIGPIPE.

From the statement above, we can conclude that it is the system sending the SIGPIPE signal that causes the [Errno 32] Broken pipe error in the Linux inter-process communication mechanism.

For comparison, Linux internally uses another signal called SIGINT. Pressing Ctrl + C sends a SIGINT signal to terminate a process, and the kill command can be used to the same effect.

Python does not ignore SIGPIPE by default. Instead, it converts the signal into an exception and raises IOError: [Errno 32] Broken pipe every time it receives a SIGPIPE.

The "broken pipe" error when piping to another program in a Linux terminal

We run into the [Errno 32] Broken pipe error whenever we pipe the output of a Python script into another program, for example:

python script.py | head -n 10

A pipeline like this creates an upstream process that sends data and a downstream process that reads it. When the downstream process no longer needs to read the upstream data, a SIGPIPE signal is sent to the upstream process.

When does the downstream process stop reading? In the example above, once the head command has read enough lines it exits, and the closing of its end of the pipe causes SIGPIPE to be sent to the upstream process.

Whenever that upstream process is a Python program, the result is an error such as IOError: [Errno 32] Broken pipe.

How to avoid the "broken pipe" error?

If we do not care about catching SIGPIPE properly and just need the script to run quickly, we can insert the following snippet at the top of the Python program:

from signal import signal, SIGPIPE, SIG_DFL
signal(SIGPIPE, SIG_DFL)

In the snippet above, we redirect SIGPIPE to the default handler SIG_DFL, so the process is simply terminated by the signal instead of raising an exception.

However, it is recommended to heed the warning in the Python signal library manual against handling SIGPIPE this way.

Catching IOError to avoid the Broken pipe error

Since the broken pipe error is an IOError, we can place a try/except block to catch it, as in the following sketch (produce_output stands in for whatever generates the script's output):

import sys
import errno

try:
    for line in produce_output():
        print(line)
except IOError as e:
    if e.errno == errno.EPIPE:
        sys.exit(0)  # the reader closed the pipe; exit quietly

In the snippet above, we import the sys and errno modules and place a try/except block to catch the raised exception and handle it.

A possible fix in multiprocessing programs

In programs that use worker processes to speed up computation on multi-core CPUs, we can try reducing the number of workers to check whether the error persists.

A large number of worker processes can conflict with one another when competing for system resources or for permission to write to disk.

Broken Pipe Error in Python

In this article, we will discuss the broken pipe error in Python: how it arises and what kind of solution is needed to rectify it.

With the advancement of emerging technologies in the IT sector, the choice of programming language plays a vital role, and Python has emerged as a favourite for fast development thanks to its simplicity and the availability of various libraries. But along with execution come errors, and it can be difficult for programmers to diagnose them.

The Emergence of Broken Pipe Error

A broken pipe error is generally an input/output error that occurs at the Linux system level. It appears during the reading and writing of files, mainly during file operations. The corresponding Linux system error is EPIPE, but every library function that returns this error code also generates a signal called SIGPIPE, which terminates the program if it is not handled or blocked. Thus a program will never see the EPIPE error unless it has handled or blocked SIGPIPE.

The Python interpreter does not ignore SIGPIPE by default; instead, it converts this signal into an exception and raises an IOError (input/output error), also known as ‘Errno 32’ or the broken pipe error.

Broken Pipe Error in Python terminal

Such a pipeline creates a process that sends data upstream and a process that reads the data downstream. When the downstream process stops reading the upstream data, a SIGPIPE signal is sent to the upstream process, and if the upstream process is a Python program, an error such as IOError: [Errno 32] Broken pipe occurs.


fatal: sha1 file ‘ ‘ write error: Broken Pipe #2428

A git push via SSH returned this error:


Can you run with GIT_TRACE=1 ? That looks more like a Git issue, since the Git LFS upload succeeded. It could also be a local SSH timeout issue, as LFS will run a short SSH command before the upload.

@technoweenie The total upload time of the LFS objects was 22 minutes. Are there SSH connections during the LFS upload? Could this be the cause of an SSH timeout? I ran with GIT_TRACE=1 and the error was the same; I am sorry I did not record it. I then ran git push --no-verify, and the push went through successfully with everything pushed completely.

is there some SSH connections during the upload of LFS? Will this be the cause of SSH timeout?

Git LFS calls ssh git@your-host.com git-lfs-authenticate . to get temporary auth for the LFS API calls. The ssh command runs and exits cleanly, so I think it’s down to your local ssh config. If you use an HTTPS git remote, or configure remote.<name>.lfsurl, you won’t have this issue.

For example, if you’re using GitHub, you could set it up like this:
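The elided configuration was presumably along these lines in .git/config (the repository URL is hypothetical); remote.<name>.lfsurl points LFS at the HTTPS endpoint while the remote URL stays on SSH:

```
[remote "origin"]
    url = git@github.com:user/repo.git
    lfsurl = https://github.com/user/repo.git/info/lfs
```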

This way Git will use SSH, while LFS will use HTTPS. Seems complicated though, but it’s an option.

I did a test: I uninstalled LFS and added a sleep of 20 minutes to the pre-push hook, which resulted in an SSH timeout, as follows:

Is the LFS case similar? I mean: when doing a git push over SSH, LFS takes a long time during that window, so it causes an SSH timeout issue.

so it causes an SSH timeout issue.

I agree with this. I think the idea is that the SSH connection, which is being opened by LFS to authenticate and then not used for 20 minutes while the objects are uploaded ends up getting closed by your ssh-agent.

I think there are two things we could do here:

  1. Increase the keep-alive time of your ssh-agent (this may be out of your control if the remote end closes, which it appears to do in the comment that you posted above).
  2. Teach LFS to send a keep alive byte on the SSH connection that Git opens, similar to git/git@8355868.

That commit only works during receive_pack() operations, but this is a ‘push’, so it’s calling send_pack() . We’d need some way to get access to the SSH connection that Git is opening, or teach Git the same receive.keepAlive option for send_pack operations.

@peff what do you think?

I think the idea is that the SSH connection, which is being opened by LFS to authenticate and then not used for 20 minutes while the objects are uploaded ends up getting closed by your ssh-agent.

Right, modulo s/LFS/Git/ in the first sentence (which I think then matches the rest of your comment). We have to do it that way because Git can’t kick off the pre-push hook until it knows what’s going to be pushed, and it doesn’t know that until it makes the ssh session and gets the server’s ref advertisement. So the server is waiting for the client to send the list of refupdate commands, during which the ssh connection is sitting idle.

It’s not clear to me what is killing the ssh connection. It could be that something at the network level is unhappy with the idle TCP connection. This could be GitHub-side reverse proxies, or just some firewall in the path. Increasing the frequency of ssh keepalives could help here.

But it could also be an application-level timeout above the ssh layer. Git by default doesn’t have any timeouts waiting for the incoming packfile, but not all servers terminate directly at actual C Git. GitHub terminates at a custom proxy layer with its own timeouts, I’m not sure what JGit does, and I have no clue what other hosts like GitLab or Atlassian do. An ssh keep-alive won’t help there; you’d need something to tell the application layer that we’re still chugging.

The right solution is to have Git send application-level keepalives while the pre-push hook is running, to tell the other side that yes, we really are doing useful work and it should keep waiting. But implementing that is going to be hard. The existing keep-alives could be hacked into the protocol only because the sender in those cases was sending sideband-encoded data. So we can send empty sideband-0 packets.

But in the phase that would need keep-alives here, the next thing to happen is the client sending the ref-update pktlines. Those are in pktline format, but there’s no sideband byte. And while technically a server can distinguish between a flush packet ("0000") and an empty pktline ("0004"), existing implementations don’t (and wouldn’t know what to do with an empty pktline at this stage anyway).

So you’d need a protocol extension to Git, that would work something like:

The server’s initial advertisement adds a new capability, client-keepalive .

New clients recognize that, and when used with a capable server, mention client-keepalive to tell the server they will use it.

While the pre-push hook runs, the Git client would then generate keepalive packets as part of the command-list , which the server would just throw away.

The only option I could come up with to hack a noop into the existing protocol was by sending meaningless delete refs/does/not/exist commands. But besides being a horrific hack in the first place, it also generates «warning: deleting a non-existent ref» messages. 😉

So I don’t think there’s really anything for LFS to do here. The issue is in Git, and would apply to other long-running pre-push hooks, too. It actually applies to sending the pack itself, too. If you have a large or badly packed repo, you could stall on pack-objects preparing the pack before it starts sending any bytes (this is pretty rare in practice, and is usually fixed by running git gc on the client side). Possibly a new keepalive capability should also imply that the client can send keepalives between the ref update and the start of the pack contents.

In the meantime, the obvious workarounds are:

If you have a big LFS push, do it separately beforehand, which would make the pre-push step largely a noop.

Use a protocol for the Git push that doesn’t keep a connection open. Git-over-http is stateless, and there’s no open connection while the hook runs.


I’m trying to mirror a large Mongo database from a production server to a dev environment by stopping Mongo on both servers and then running the command:

rsync --archive --delete --recursive --verbose --compress --rsh "ssh -t -o StrictHostKeyChecking=no -i key.pem" remoteuser@remotehost:/var/lib/mongodb/ /var/lib/mongodb

It ran fine for a few minutes, but then stopped with the error:

receiving incremental file list
./
collection-228--5129329295041693519.wt
inflate returned -3 (0 bytes)
rsync error: error in rsync protocol data stream (code 12) at token.c(557) [receiver=3.1.1]
rsync: [generator] write error: Broken pipe (32)
rsync error: error in socket IO (code 10) at io.c(820) [generator=3.1.1]

Googling the error suggests it’s some sort of network connection problem, but I’m able to connect to both servers just fine.

If I re-run the command, it fails at the exact same file with the same error message. What’s causing this error and how do I fix it?

asked Nov 15, 2017 at 4:22 by Cerin

It turns out rsync invokes rsync on the remote server, and the versions were not the same between my servers. I was running 3.1.1 on the destination server but 3.1.0 on the source server, and apparently this was enough to break the download of certain files. I installed 3.1.1 on the source server, and afterwards the transfer worked flawlessly.

answered Nov 15, 2017 at 17:00 by Cerin

Solution:

As explained by Cerin here, just upgrade your rsync versions to ensure that they are exactly the same on both the sending and receiving machines!

I had this exact same problem: a broken pipe write error when transferring a very large file with rsync over ssh, with rsync’s --compress (-z) flag set.

My rsync error was:

rsync: [sender] write error: Broken pipe (32)
rsync error: error in rsync protocol data stream (code 12) at io.c(837) [sender=3.1.0]

rsync --version from my SENDING PC showed rsync version 3.1.0 protocol version 31, whereas from my RECEIVING PC it showed rsync version 3.1.2 protocol version 31. Therefore, I decided to just upgrade my SENDING PC to version 3.1.2 as well. Once I did that, it worked!

How to upgrade your rsync version:

Simply follow these instructions here to install rsync from source: http://www.beginninglinux.com/home/backup/compile-rsync-from-source-on-ubuntu.

In short:

  1. Check your rsync version so you know what you currently have:

     rsync --version
    
  2. Download your desired source file for the version you want here: https://download.samba.org/pub/rsync/src/.

  3. In your GUI file manager, right-click and extract it.

  4. Build from source, and install:

     ./configure
     make
     sudo checkinstall
    
  5. Check your rsync version to ensure it was updated:

     rsync --version
    
  6. Done!

  7. (I haven’t tested this, but apparently): To remove it from your system use:

     dpkg -r rsync
    

Once I did the upgrade to get both systems’ rsync versions the same, it worked perfectly!

Related:

  1. https://bugs.launchpad.net/ubuntu/+source/rsync/+bug/1300367
  2. https://unix.stackexchange.com/questions/242898/rsync-keeps-disconnecting-broken-pipe/547861#547861
  3. [my answer] https://askubuntu.com/questions/791002/how-to-prevent-sshfs-mount-freeze-after-changing-connection-after-suspend/942820#942820

answered Oct 21, 2019 at 7:15 by Gabriel Staples

Are you trying to sync data between a local and a remote server? If the remote server is an AWS EC2 instance, you can use the command below to sync data:

rsync -ravhz -e "ssh -i /path/to/EC2_KEY.pem" /path/to/local/files/* EC2_USER@EC2_INSTANCE:/path/to/remote/files

Double-check the source and destination before syncing, as it is easy to sync in the wrong direction.

If you are trying to sync from EC2 to your local server, check that the proper ports are open between the servers.

Try telnet first to check connectivity between the servers; you may have to whitelist IPs and ports, as some firewalls block the data transfer.

answered Nov 15, 2017 at 6:25

Vijay Muddu

I came to this question because I had this problem. After spending 5 hours trying to solve it, I finally found a solution. For me the only thing that worked was changing the MTU size. I ran this command:

sudo ifconfig enp42s0 mtu 1024

The default size was 1500 and I changed it to 1024. Run ifconfig to determine the name of your network interface. In my case the name was enp42s0, which is very weird. You will probably have something like eth0.

The solution is not ideal, because now I cannot share my screen in Google Meet. If I change the MTU size back to the default value of 1500, screen sharing works again. I wonder what else will stop working after changing the value to 1024.

I am using the Linux distribution Pop!_OS, which is based on Ubuntu. I believe it is a network driver issue.
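Rather than hard-coding 1024, one could probe for the largest MTU that passes unfragmented, using ping's "don't fragment" mode (Linux iputils). A sketch; the probe host is a placeholder (127.0.0.1 here so it runs anywhere, but in practice you would point it at the rsync destination). The ICMP payload is MTU minus 28 bytes (20-byte IP header plus 8-byte ICMP header):

```shell
probe_host=127.0.0.1   # placeholder: replace with the remote rsync host

# ICMP payload = MTU - 20 (IP header) - 8 (ICMP header)
payload_for_mtu() {
    echo $(( $1 - 28 ))
}

for mtu in 1500 1400 1200 1024; do
    # -M do prohibits fragmentation, so oversized probes fail instead of fragmenting
    if ping -c 1 -W 2 -M do -s "$(payload_for_mtu "$mtu")" "$probe_host" >/dev/null 2>&1; then
        echo "MTU $mtu passes unfragmented"
        break
    fi
done
```

The largest value that passes is a reasonable candidate for the interface MTU; anything larger risks the silent drops that can surface as broken pipes.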

answered Apr 7, 2022 at 3:24

Tono Nam

Out of the blue, upon vagrant up, rsync started throwing the error below:

There was an error when attempting to rsync a synced folder.
Please inspect the error message below for more info.

Host path: /c/Users/David/Sandbox/citypantry/frontend/
Guest path: /home/citypantry/project/frontend
Command: rsync --verbose --archive -z --copy-links --chmod=ugo=rwX --no-perms --no-owner --no-group --rsync-path "sudo rsync" -e "ssh -p 2222 -o ControlMaster=auto -o ControlPath=C:/bin/cygwin64/tmp/ssh.588 -o ControlPersist=10m -o StrictHostKeyChecking=no -o IdentitiesOnly=true -o UserKnownHostsFile=/dev/null -i 'C:/Users/David/Sandbox/citypantry/vagrant/.vagrant/machines/default/virtualbox/private_key'" --exclude .vagrant/ --exclude app/cache/ --exclude app/logs/ --exclude node_modules /c/Users/David/Sandbox/citypantry/frontend/ vagrant@127.0.0.1:/home/citypantry/project/frontend
Error: Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
select: Interrupted system call
rsync: [sender] write error: Broken pipe (32)
rsync error: unexplained error (code 255) at io.c(820) [sender=3.1.1]

This is Windows 10 running Cygwin (x64) with rsync 3.1.1 under ConEmu (x64). Nothing that I know of has changed in the system, and the sync has worked for weeks without any problems.

What does this error mean and how can I fix it?

asked Dec 26, 2015 at 10:04

David

Looks like a problem with Vagrant 1.8.0: https://github.com/mitchellh/vagrant/issues/6702

Updating it should solve the problem. If you can't, then the workaround is to edit $VAGRANT_HOME\embedded\gems\gems\vagrant-1.8.0\plugins\synced_folders\rsync\helper.rb and remove lines 77-79:

"-o ControlMaster=auto " +
"-o ControlPath=#{controlpath} " +
"-o ControlPersist=10m " +

answered Jul 13, 2016 at 11:55

ldnunes

The Broken pipe error usually happens when you hit a connection timeout. It can happen with rsync while the remote side is calculating file differences and doesn't respond to the client in time.

To avoid this, try increasing the server alive interval value in your SSH config (~/.ssh/config):

Host *
  ServerAliveInterval 30
  ServerAliveCountMax 6

Also consider doing the same on the remote side (in /etc/ssh/sshd_config), e.g.

ClientAliveInterval 30
ClientAliveCountMax 6

See: What do the options ServerAliveInterval and ClientAliveInterval mean?
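When editing ~/.ssh/config isn't convenient, the same keepalive options can be passed per-invocation through rsync's -e flag. A sketch; host and paths are placeholders:

```shell
# Keepalive options applied per-invocation instead of via ~/.ssh/config.
ssh_keepalive="-o ServerAliveInterval=30 -o ServerAliveCountMax=6"

# Placeholder host/paths -- substitute your own:
#   rsync -avz -e "ssh $ssh_keepalive" /path/to/local/ user@remote-host:/path/to/remote/
```

This keeps the idle SSH channel alive for this one transfer without changing behaviour for every other SSH session.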


Alternatively add the following keep alive line into your Vagrantfile:

config.vm.ssh.keep_alive = true

If you think it's a ControlMaster issue, note that recent Vagrant versions no longer use it on Windows platforms.


answered Dec 2, 2016 at 20:25

kenorb

  • [SOLVED] rsync error 32 broken pipe??

  1. rsync error 32 broken pipe??

    Hello Everyone! I need some help. I’m setting up my Ubuntu Server to sync my mp3 files over a network. I need it to pull files from my laptop and store them on my external hard drive. However the sync keeps failing with an error and I can’t figure out why. The remote drive is a ‘My Book’ external drive with a shared file on it.

    Here is the sync code that I am running:

    rsync -arsvzPn josh@192.168.1.101:'/home/josh/Music/Sync Test Local/' /media/josh/My Book/My Network Storage/SyncTestRemote

    Here is the error that I’m getting:

    receiving incremental file list
    rsync: change_dir "/home/josh/Music/Sync Test Local" failed: No such file or directory (2)

    sent 8 bytes received 106 bytes 76.00 bytes/sec
    total size is 0 speedup is 0.00 (DRY RUN)
    rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [Receiver=3.1.1]
    rsync: [Receiver] write error: Broken pipe (32)

    What am I doing wrong, or what am I not seeing? Thank you very much for your time.


  2. Re: rsync error 32 broken pipe??

    Thread moved to Server Platforms.


  3. Re: rsync error 32 broken pipe??

    I’m not a frequent user of rsync commands but I’m guessing that the command should look like this ..

    Code:

    rsync -arsvzPn 'josh@192.168.1.101:/home/josh/Music/Sync Test Local/' '/media/josh/My Book/My Network Storage/SyncTestRemote'

    note where the single quotes are placed ..

    Why not use Grsync (gui) in dry run mode to test your commands?


  4. Re: rsync error 32 broken pipe??

    I figured out what was wrong. No quotes are needed at all.

    correct syntax is as follows:

    rsync -arsvzPn josh@192.168.1.101:/home/josh/Music/Sync Test Local/ /media/josh/My Book/My Network Storage/SyncTestRemote
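A likely explanation for why no quotes were needed on the remote path: the s in -arsvzPn is rsync's -s (--protect-args), which sends the remote path to the other side without letting the remote shell re-split it on spaces. Without -s, remote spaces need backslash-escaping. A sketch (remote forms use the thread's host and paths and are commented out; the local part is runnable and assumes rsync is installed):

```shell
#!/bin/sh
# Remote forms (host/paths from the thread):
#   with --protect-args (the s in -arsvzPn), plain quoting is enough:
#     rsync -arsvzPn josh@192.168.1.101:'/home/josh/Music/Sync Test Local/' \
#         '/media/josh/My Book/My Network Storage/SyncTestRemote'
#   without -s, backslash-escape the remote spaces instead:
#     rsync -arvzPn josh@192.168.1.101:'/home/josh/Music/Sync\ Test\ Local/' \
#         '/media/josh/My Book/My Network Storage/SyncTestRemote'

# Local demonstration that ordinary quoting handles spaces on the local side:
tmp=$(mktemp -d)
mkdir -p "$tmp/Sync Test Local"
echo song > "$tmp/Sync Test Local/track.mp3"
rsync -a "$tmp/Sync Test Local/" "$tmp/SyncTestRemote/"
ls "$tmp/SyncTestRemote/"
rm -rf "$tmp"
```

The asymmetry is the key point: local paths only pass through your own shell once, but a remote path passes through the remote shell a second time unless -s suppresses that.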



When I try to rsync -qaPH source/ 192.168.1.21:/var/backups I get

rsync: [sender] write error: Broken pipe (32)
rsync error: unexplained error (code 255) at io.c(837) [sender=3.1.0]

What's wrong with my command?

asked Nov 29, 2014 at 22:10

Alex

To investigate, add one or more -v options to the rsync command.
Also, try to use plain ssh:

ssh -v 192.168.1.21 /bin/true

to find out whether it is rsync or the underlying ssh connection that is causing the trouble.

answered May 18, 2015 at 8:38

Arjen
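The ssh-first advice above can be wrapped into a small guard script: only run rsync once the bare SSH transport works. A sketch; the host is the one from the question, and BatchMode avoids hanging on a password prompt:

```shell
#!/bin/sh
# Verify the bare SSH transport before blaming rsync.
transport_ok() {
    ssh -o BatchMode=yes -o ConnectTimeout=10 "$1" /bin/true
}

# Example usage:
#   if transport_ok 192.168.1.21; then
#       rsync -qaPH source/ 192.168.1.21:/var/backups
#   else
#       echo "ssh failed; fix the transport before retrying rsync" >&2
#   fi
```

If the /bin/true probe fails, the problem is in SSH (keys, firewall, sshd), and no amount of rsync flags will help.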


255 is actually not a "native" rsync return code. rsync scrapes the 255 error code from SSH and returns it. It looks to me like something on the destination server is blocking SSH or breaking it once it’s connected, hence, "broken pipe". I disagree with @kenorb because if it were a timeout issue you would probably be seeing rsync exit codes 30 or 35.

answered Jul 19, 2017 at 18:30

medley56
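To make the exit-code distinction concrete, a small lookup helper; the descriptions are taken from the EXIT VALUES section of rsync(1), and the 255 entry reflects the SSH pass-through behaviour described above:

```shell
# Map common rsync exit codes to short descriptions (per rsync(1) EXIT VALUES).
rsync_exit_meaning() {
    case "$1" in
        0)   echo "success" ;;
        10)  echo "error in socket I/O" ;;
        12)  echo "error in rsync protocol data stream" ;;
        23)  echo "partial transfer due to error" ;;
        30)  echo "timeout in data send/receive" ;;
        35)  echo "timeout waiting for daemon connection" ;;
        255) echo "ssh-level failure (code passed through from ssh)" ;;
        *)   echo "see EXIT VALUES in rsync(1)" ;;
    esac
}
```

So a broken pipe paired with code 30 or 35 points at a timeout, while one paired with 255 points at the SSH layer itself.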

The Broken pipe error most likely means that you've hit a timeout. For example, the remote rsync command started calculating the file differences but didn't reply to the client in time.

If this happens very often, add these settings to your local ~/.ssh/config:

Host *
  ServerAliveInterval 30
  ServerAliveCountMax 6

and on the remote server (if you’ve got the access), setup these in your /etc/ssh/sshd_config:

ClientAliveInterval 30
ClientAliveCountMax 6

See: What do the options ServerAliveInterval and ClientAliveInterval mean?


answered Dec 2, 2016 at 20:10

kenorb

I got the 255 error when rsnapshot using rsync encountered the situation «The ECDSA host key for foobar has changed». The rest of the error message included «POSSIBLE DNS SPOOFING DETECTED» and «REMOTE HOST IDENTIFICATION HAS CHANGED».

I had recreated my server and so the fingerprint had changed. I removed the ECDSA key (ssh-keygen -f "/home/bruce/.ssh/known_hosts" -R "foobar") and agreed to the new one when I sshed into foobar again.

ssh -vvv foobar gave me the info I needed to see this issue.


answered May 12, 2022 at 12:45

Bruce

I know this issue is old, but maybe someone (like me) still has this error.

a) Check if the ssh service is running:

sudo service ssh status

b) Check the connection with triple verbose command:

ssh -vvv <hostname>

c) Maybe you are using the wrong SSH key, or the key is broken in some way.

Vine

answered May 4, 2017 at 8:06

vine

I had a similar error using rsync via my deploy tool for an Ember app (ember-cli-deploy). I had to configure my SSH correctly (add my private keys to ~/.ssh/).

answered Oct 26, 2017 at 14:38

morhook
