Error writing stdout broken pipe


Contents

  1. fatal: sha1 file ‘ ‘ write error: Broken Pipe #2428
  2. Comments
  3. 🪠 Handle ‘broken pipe’ error in Go
  4. Reproduce the broken pipe error
  5. Handle the broken pipe error
  6. Difference between broken pipe and connection reset by peer
  7. How to fix Broken Pipe Error in Linux
  8. Inspecting the Command
  9. Fixing a Problem with File System
  10. Afterwords
  11. write /dev/stdout: broken pipe #42
  12. Comments

fatal: sha1 file ‘ ‘ write error: Broken Pipe #2428

I ran git push via SSH and it returned this error:


Can you run with GIT_TRACE=1? That looks more like a Git issue, since the Git LFS upload succeeded. It could also be a local SSH timeout issue, as LFS will run a short SSH command before the upload.
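
For reference, tracing is enabled per invocation by setting the environment variable on the command line; a minimal example (remote and branch names are placeholders):

GIT_TRACE=1 git push origin master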

@technoweenie The total upload time of the LFS objects was 22 minutes; are there SSH connections during the LFS upload, and could that be the cause of an SSH timeout? I ran with GIT_TRACE=1 and the error was the same; I am sorry I did not record it. I then ran git push --no-verify, and the push went through successfully with everything pushed completely.

are there SSH connections during the LFS upload, and could that be the cause of an SSH timeout?

Git LFS calls ssh git@your-host.com git-lfs-authenticate to get temporary auth for the LFS API calls. The ssh command runs and exits cleanly, so I think it's up to your local ssh config. If you use an HTTPS git remote, or configure remote.<name>.lfsurl, you won't have this issue.

For example, if you’re using GitHub, you could set it up like this:
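The config example itself did not survive extraction, but based on the surrounding text it was presumably something along these lines (user/repo is a placeholder; remote.<name>.lfsurl is the Git LFS setting mentioned above):

git config remote.origin.lfsurl "https://github.com/user/repo.git/info/lfs"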

This way Git will use SSH, while LFS will use HTTPS. It seems complicated, but it's an option.

I did a test: I uninstalled LFS and added a 20-minute sleep to the pre-push hook, which resulted in an SSH timeout.
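
The hook output was not preserved here, but the setup can be sketched as a two-line pre-push hook (our reconstruction, not the poster's exact hook):

#!/bin/sh
# .git/hooks/pre-push -- block for 20 minutes to simulate a slow LFS upload
sleep 1200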

Is the case of LFS similar? I mean: when doing a git push over SSH, LFS takes up a lot of time during the push, so it causes an SSH timeout issue.

so it causes an SSH timeout issue.

I agree with this. I think the idea is that the SSH connection, which is opened by LFS to authenticate and then not used for 20 minutes while the objects are uploaded, ends up getting closed by your ssh-agent.

I think there are two things we could do here:

  1. Increase the keep-alive time of your ssh-agent (this may be out of your control if the remote end closes the connection, which it appears to do in the comment that you posted above); see the config sketch after this list.
  2. Teach LFS to send a keep-alive byte on the SSH connection that Git opens, similar to git/git@8355868.
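
For the first option, the keep-alive frequency can be raised on the client side in ~/.ssh/config; a minimal sketch (the host name is a placeholder) using the standard OpenSSH options:

# ~/.ssh/config -- send an SSH-level keep-alive probe every 60 seconds,
# tolerating up to 30 unanswered probes (about 30 minutes of idle time)
Host your-host.com
    ServerAliveInterval 60
    ServerAliveCountMax 30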

That commit only works during receive_pack() operations, but this is a ‘push’, so it's calling send_pack(). We'd need some way to get access to the SSH connection that Git is opening, or teach Git the same receive.keepAlive option for send_pack operations.

@peff what do you think?

I think the idea is that the SSH connection, which is opened by LFS to authenticate and then not used for 20 minutes while the objects are uploaded, ends up getting closed by your ssh-agent.

Right, modulo s/LFS/Git/ in the first sentence (which I think then matches the rest of your comment). We have to do it that way because Git can’t kick off the pre-push hook until it knows what’s going to be pushed, and it doesn’t know that until it makes the ssh session and gets the server’s ref advertisement. So the server is waiting for the client to send the list of ref-update commands, during which the ssh connection is sitting idle.

It’s not clear to me what is killing the ssh connection. It could be that something at the network level is unhappy with the idle TCP connection. This could be GitHub-side reverse proxies, or just some firewall in the path. Increasing the frequency of ssh keepalives could help here.

But it could also be an application-level timeout above the ssh layer. Git by default doesn’t have any timeouts waiting for the incoming packfile, but not all servers terminate directly at actual C Git. GitHub terminates at a custom proxy layer with its own timeouts, I’m not sure what JGit does, and I have no clue what other hosts like GitLab or Atlassian do. An ssh keep-alive won’t help there; you’d need something to tell the application layer that we’re still chugging.

The right solution is to have Git send application-level keepalives while the pre-push hook is running, to tell the other side that yes, we really are doing useful work and it should keep waiting. But implementing that is going to be hard. The existing keep-alives could be hacked into the protocol only because the sender in those cases was sending sideband-encoded data. So we can send empty sideband-0 packets.

But in the phase that would need keep-alives here, the next thing to happen is the client sending the ref-update pktlines. Those are in pktline format, but there’s no sideband byte. And while technically a server can distinguish between a flush packet ("0000") and an empty pktline ("0004"), existing implementations don’t (and wouldn’t know what to do with an empty pktline at this stage anyway).
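
As background (this sketch is ours, not part of the thread): a pkt-line starts with a 4-hex-digit length that counts the length prefix itself plus the payload, which is why an empty pkt-line reads "0004", while the flush packet "0000" is a distinct marker rather than a length. In Go:

package main

import "fmt"

// pktLine encodes a payload in Git's pkt-line framing: a 4-digit
// lowercase-hex length counting the 4 header bytes plus the payload.
// An empty payload encodes as "0004"; the flush packet "0000" is a
// special marker, not a zero-length pkt-line.
func pktLine(payload string) string {
	return fmt.Sprintf("%04x%s", 4+len(payload), payload)
}

func main() {
	fmt.Println(pktLine(""))     // prints "0004" -- empty pkt-line
	fmt.Println(pktLine("done")) // prints "0008done"
}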

So you’d need a protocol extension to Git, which would work something like this:

  1. The server’s initial advertisement adds a new capability, client-keepalive.

  2. New clients recognize that, and when used with a capable server, mention client-keepalive to tell the server they will use it.

  3. While the pre-push hook runs, the Git client would then generate keepalive packets as part of the command-list, which the server would just throw away.

The only option I could come up with to hack a noop into the existing protocol was by sending meaningless delete refs/does/not/exist commands. But besides being a horrific hack in the first place, it also generates "warning: deleting a non-existent ref" messages. 😉

So I don’t think there’s really anything for LFS to do here. The issue is in Git, and would apply to other long-running pre-push hooks, too. It actually applies to sending the pack itself, too. If you have a large or badly packed repo, you could stall on pack-objects preparing the pack before it starts sending any bytes (this is pretty rare in practice, and is usually fixed by running git gc on the client side). Possibly a new keepalive capability should also imply that the client can send keepalives between the ref update and the start of the pack contents.

In the meantime, the obvious workarounds are:

  1. If you have a big LFS push, do it separately beforehand, which would make the pre-push step largely a noop.

  2. Use a protocol for the Git push that doesn’t keep a connection open. Git-over-http is stateless, and there’s no open connection while the hook runs.

Source

🪠 Handle ‘broken pipe’ error in Go


The broken pipe is a TCP/IP error that occurs when you write to a stream whose other end (the peer) has closed the underlying connection. The first write to the closed connection causes the peer to reply with an RST packet, indicating that the connection should be terminated immediately. The second write to the socket that has already received the RST causes the broken pipe error. To detect a broken pipe in Go, check whether the error returned by the write is syscall.EPIPE. This error is typically seen when the server crashes while the client is still sending data to it.

Reproduce the broken pipe error

In the following example, we reproduce the broken pipe error by creating a server and client that do the following:

  • the server reads a single byte and then closes the connection
  • the client sends three bytes with an interval of one second between them

The server receives the first client byte and closes the connection. The next byte of the client sent to the closed connection causes the server to reply with an RST packet. The socket that received the RST will return the broken pipe error when more bytes are sent to it. This is what happens when the client sends the last byte to the server.
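
The article's code listing was lost in extraction; the following self-contained sketch matches the description above (the port is arbitrary, and syscall.EPIPE is Unix-specific):

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	// Server: accept one connection, read a single byte, then close it.
	ln, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		panic(err)
	}
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			panic(err)
		}
		buf := make([]byte, 1)
		conn.Read(buf) // read exactly one byte
		conn.Close()   // then close the connection
	}()

	// Client: send three bytes with an interval of one second between them.
	conn, err := net.Dial("tcp", "127.0.0.1:8080")
	if err != nil {
		panic(err)
	}
	for i := 0; i < 3; i++ {
		// Write 1 is read by the server; write 2 hits the closed
		// connection and provokes an RST; write 3 fails with EPIPE.
		if _, err := conn.Write([]byte("a")); err != nil {
			if errors.Is(err, syscall.EPIPE) {
				fmt.Println("This is broken pipe error")
			} else {
				fmt.Println(err)
			}
			return
		}
		time.Sleep(time.Second)
	}
}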

Handle the broken pipe error

To handle the broken pipe, you need to check whether the error returned by the write is syscall.EPIPE. In the example above, we perform this check using the errors.Is() function and print the message "This is broken pipe error" if it occurs. The broken pipe can be seen on either the client or the server side, depending on which one is trying to write to the closed connection. Typically there is no need to handle it in any special way, since it is normal for a connection to be interrupted by either side of the communication. For example, you can ignore the error, log it, or reconnect when it occurs.

Difference between broken pipe and connection reset by peer

Usually, you get the broken pipe error when you write to the connection after the RST has been sent; when you read from the connection after the RST instead, you get the connection reset by peer error. Check our article about the connection reset by peer error to better understand the difference between these two errors.
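
For contrast, here is a sketch of ours (again with an arbitrary port) that provokes the read-side error instead: the server forces an RST by closing with SO_LINGER set to zero, and the client's subsequent read fails with syscall.ECONNRESET:

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:8081")
	if err != nil {
		panic(err)
	}
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			panic(err)
		}
		conn.(*net.TCPConn).SetLinger(0) // make Close send an RST, not a FIN
		conn.Close()
	}()

	conn, err := net.Dial("tcp", "127.0.0.1:8081")
	if err != nil {
		panic(err)
	}
	time.Sleep(time.Second) // give the RST time to arrive

	if _, err := conn.Read(make([]byte, 1)); errors.Is(err, syscall.ECONNRESET) {
		fmt.Println("This is connection reset by peer error")
	} else {
		fmt.Println(err)
	}
}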


Source

How to fix Broken Pipe Error in Linux

Updated on 5/5/2022 – Have you ever encountered a situation in which you were unable to download any package on your Linux machine? Or you might have seen an error like "package not installed"? That kind of error can usually be fixed with a command like sudo apt install -f. On rare occasions, though, you may run into a broken pipe error.

A pipe in Linux/Unix connects two processes: one holds the read end of the file and the other holds the write end. When a process writes to a pipe, the data is stored in a buffer and retrieved from there by the other process. A broken pipe occurs when the process at one end exits prematurely while the process at the other end still tries to use the pipe.

Example use case:

A user has just recently reinstalled RVM (Ruby Version Manager) after he performed a fresh install of Ubuntu.

He then opened up the terminal and issued the command:

type rvm | head -1

This issued the following error:

rvm is a function -bash: type: write error: Broken pipe

What happened here is that when the user ran type rvm | head -1, bash actually executed type rvm in one process and head -1 in another. The stdout of the type part is connected to the write end of a pipe, whereas the stdin of the head part is hooked up to the read end. Note that the two processes run concurrently (at the same time).

The head -1 process reads data from stdin and prints out a single line (as dictated by the -1 option) before exiting, which closes the read end of the pipe. The output of the rvm function is quite long (about 11 kB after bash has parsed and reconstructed it), so head exits while type still has a few kilobytes of data left to write.

Since type is trying to write to a pipe whose other end has been closed, the write() call it invokes returns the EPIPE error, which is reported as "Broken pipe".
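
A quick way to watch this happen on a typical GNU/Linux box (our sketch; exact message wording may vary) is to ignore SIGPIPE in the shell first, so that the failed write() surfaces as an EPIPE error message instead of silently killing the writer:

$ trap '' PIPE
$ seq 1 100000 | head -1
1
seq: write error: Broken pipe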

Inspecting the Command

In most cases the command itself is fine, but the first thing to check is whether the command you issued was correct. Reissue the command and check whether it executes. You can also try commands like sudo apt update and sudo apt install -f, as these are not destructive in nature. If the problem persists, try rebooting the machine and see whether that resolves it.

Fixing a Problem with File System

If you have issued the commands mentioned above multiple times and still get the error, check whether the terminal output reads something like "read-only file system". This can happen when your boot partition gets mounted as read-only for some reason, for example after a faulty software installation when the system decides it is not safe to write to the drive.

Another cause might be that you try to install something from apt and the installer needs to access some resource in read mode but cannot perform the read operation properly, throwing an error like "sudo: cannot mount". This occurs because most ‘entities’ in Linux are files, and in order to read a resource Linux needs to open and read that file. If another process is currently using that resource, the file may not be readable. Also, if the reading process exits abruptly without closing the file, the file may remain corrupted until the next boot.

If you still cannot access the files even after rebooting, then the problem could be bigger than anticipated. You may have a broken file system. To resolve this issue, you may need a stable Linux environment in order to work on the broken system. The best way to do this is to boot from a Live Linux USB drive and work from it.

This is the right moment to backup all your data. Although the following steps are safe, you should make sure to store your data on a secure device.

Once you have booted into the Live USB drive, you should start by checking the partitions for a corrupt file system. To do so, issue the following command:

sudo fsck.ext4 -fv /dev/sdaX

Note that X stands for the partition that you are trying to scan, and that this command is for partitions of type ext4. For an ext3 or ext2 partition, replace the command with fsck.ext3 or fsck.ext2 respectively. This will scan the drive and print verbose output on the terminal (note the -v flag). Alternatively, you can add the -c flag to surface-scan the drive; it will look for bad sectors.
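
For example, to combine the forced verbose check with a bad-sector surface scan of the first partition of the first disk:

sudo fsck.ext4 -fcv /dev/sda1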

Once you have done this, your partition should hopefully be fixed. Now boot into your machine and issue the command:

sudo mount -o rw,remount /

This will restore the read/write permission on your drive and will therefore solve the broken pipe issue.

Afterwords

You have just seen one solution to the broken pipe issue, but more often than not a broken pipe is not the problem itself; it is a symptom of a larger one. You can get a broken pipe whenever you write to a stream-like resource that has been closed prematurely.


Source

write /dev/stdout: broken pipe #42

I wanted to give it a try, but when I run the source command I get an error message saying write /dev/stdout: broken pipe on macOS 10.14.3 with Docker Desktop 2.0.0.3 (Engine: 18.09.2). Therefore dockrun_t3rdf makehtml isn't available.


Don’t know what the problem is, but can you try one or more of the following?

  1. Check if the Docker Compose workflow works for you: see https://github.com/t3docs/docker-render-documentation#docker-compose. This will use the settings in the docker-compose.yml file, so you do not need to redirect the output of show-shell-commands.
  2. Dump the output into a file instead, e.g. (docker run --rm t3docs/render-documentation show-shell-commands) > tmp.sh, then execute that: . ./tmp.sh
  3. Check which shell you are using: run echo "$SHELL", see https://stackoverflow.com/questions/43417162/which-shell-i-am-using-in-mac.

I use Linux and have never had a problem with these commands. I don't have a Mac available to test, so I can only guess here, sorry. The error message indicates that something may be wrong with redirecting stdout or with running the container. However, I believe this should work in a bash shell. You might want to check which shell you are using (as described in 3) and use bash instead.

Please let us know, if any of this works for you, in order to help others.

  1. Try the html-only version: source . In that case, you need to run dockrun_t3rdh makehtml instead (note the h instead of the f in t3rdh).

Any other error message or something in a logfile that might further help to narrow down the problem?

I found the following snippet in the #typo3-documentation channel on Slack:

When I add these lines to my .profile file, it works. Sadly I cannot tell why, but maybe it helps someone else. It seems quite similar to your second suggested solution. Btw, I'm using bash.

Source

This is one that’s caught me out on more than one occasion now, hopefully by blogging it I won’t forget about it again quite so soon…

When attempting to push to a git repository over HTTP, you may experience a “broken pipe” error along the lines of the following:

Counting objects: 14466, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3746/3746), done.
error: RPC failed; result=22, HTTP code = 400
fatal: The remote end hung up unexpectedly
Writing objects: 100% (14466/14466), 104.13 MiB | 31.34 MiB/s, done.
Total 14466 (delta 10927), reused 13812 (delta 10474)
fatal: The remote end hung up unexpectedly
fatal: expected ok/error, helper said '2004�Ȍ/↓/Ɠyb��Nj}↑z��"#7‼.m���+x9`>��☼�uhh_������м5���§��z���W?�^&��͙mQM��a`Q�C���Z'

fatal: write error: Broken pipe

This error occurs when the amount of data you're trying to push in one go exceeds Git's HTTP post buffer, which is defined in the docs as:

Maximum size in bytes of the buffer used by smart HTTP transports when POSTing data to the remote system. For requests larger than this buffer size, HTTP/1.1 and Transfer-Encoding: chunked is used to avoid creating a massive pack file locally. Default is 1 MiB, which is sufficient for most requests.

Whilst most day-to-day pushes are likely to be under 1 MiB, if you're pushing a large repository over HTTP for the first time, there's a good chance you'll exceed this limit, resulting in the above error.

Increasing the buffer is a simple config change to set the new size in bytes (a value which will obviously need to exceed the size of the push that’s erroring):

git config http.postBuffer 209715200
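
Here 209715200 bytes is 200 × 1024 × 1024, i.e. a 200 MiB buffer. Add --global if you want the setting to apply to all repositories rather than just the current one:

git config --global http.postBuffer 209715200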

One more thing – even after increasing Git's buffer I was still getting fatal errors. If you also host the destination repository, make sure the server it's running on doesn't limit the size of POST requests it will accept – a configuration change may be required there as well. After I bumped up my max request length, everything worked as expected.
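
Which knob that is depends on what fronts the repository. For example, if it sits behind nginx, the cap on request bodies is client_max_body_size (shown here matching the 200 MiB buffer above; adjust to your own setup):

# in the relevant nginx server or location block
client_max_body_size 200m;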


Hello. So to be honest I have no idea what this software is used for.

I am a translator working in Qt Linguist, and I use Sourcetree to pull the newest translation file into Qt Linguist. After my translation is done, I use Sourcetree again to stage and push my translation file into the system.

Since a few days ago I cannot pull anymore. I see there are 4 files to pull, but after choosing Pull this happens:

git -c diff.mnemonicprefix=false -c core.quotepath=false --no-optional-locks fetch origin
Logon failed, use ctrl+c to cancel basic credential prompt.
fatal: Authentication failed for 'https://companyname.visualstudio.com/VetNet/_git/VetNet/'
Logon failed, use ctrl+c to cancel basic credential prompt.
fatal: Authentication failed for 'https://companyname.visualstudio.com/VetNet/_git/VetNet/'
Logon failed, use ctrl+c to cancel basic credential prompt.
fatal: Authentication failed for 'https://companyname.visualstudio.com/VetNet/_git/VetNet/'
Logon failed, use ctrl+c to cancel basic credential prompt.
fatal: Authentication failed for 'https://companyname.visualstudio.com/VetNet/_git/VetNet/'
Logon failed, use ctrl+c to cancel basic credential prompt.
Completed with errors, see above.

And after those errors there are still 4 files to pull, so the pull did not complete correctly.

As before, the same Microsoft Live window pops up to enter credentials; previously I just closed the window without entering anything and push/pull still worked.

Someone told me to open a terminal and do git pull. So I press Ctrl+C in the terminal (to cancel the basic credential prompt) and get a window to enter the password.

After I enter the password this happens :

Error: error writing "stdout": broken pipe.

error writing "stdout": broken pipe
error writing "stdout": broken pipe
    while executing
"puts $::answer"
    (procedure "finish" line 9)
    invoked from within
"finish"
    invoked from within
".b.ok invoke"
    ("uplevel" body line 1)
    invoked from within
"uplevel #0 [list $w invoke]"
    (procedure "tk::ButtonUp" line 24)
    invoked from within
"tk::ButtonUp .b.ok"
    (command bound to event)

Sorry if this is complicated, but I have never worked with this program before; I am not a programmer (just a simple translator). Any assistance is appreciated.
