Error: RPC failed; curl 18 transfer closed with outstanding read data remaining

  • Broken pipe errors on git push

    • Increase the POST buffer size in Git
    • RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
    • Check your SSH configuration
    • Running a git repack
    • Upgrade your Git client
  • ssh_exchange_identification error
  • Timeout during git push / git pull
  • git clone over HTTP fails with transfer closed with outstanding read data remaining error
  • Password expired error on Git fetch via SSH for LDAP user
  • Error on Git fetch: “HTTP Basic: Access Denied”

Sometimes things don’t work the way they should or as you might expect when
you’re using Git. Here are some tips on troubleshooting and resolving issues
with Git.

Broken pipe errors on git push

‘Broken pipe’ errors can occur when attempting to push to a remote repository.
When pushing you usually see:

Write failed: Broken pipe
fatal: The remote end hung up unexpectedly

To fix this issue, try one of the following solutions.

Increase the POST buffer size in Git

If you’re using Git over HTTP instead of SSH, you can try increasing the POST buffer size in Git’s
configuration.

Example of an error during a clone:
fatal: pack has bad object at offset XXXXXXXXX: inflate returned -5

Open a terminal and enter:

git config http.postBuffer 52428800

The value is specified in bytes, so in the above case the buffer size has been
set to 50 MB. The default is 1 MB.
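
If you want the setting to apply to every repository on the machine rather than just the current one, you can set it globally and then confirm the value; a minimal example using standard Git options:

# Set a 50 MB POST buffer for all repositories, then verify it
git config --global http.postBuffer 52428800
git config --global --get http.postBuffer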

RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)

This problem may be caused by a slow internet connection. If you use Git over HTTP
instead of SSH, try one of these fixes:

  • Increase the POST buffer size in the Git configuration with git config http.postBuffer 52428800.
  • Switch to the HTTP/1.1 protocol with git config http.version HTTP/1.1.

If neither approach fixes the error, you may need a different internet service provider.
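
For reference, both fixes can be applied to the current repository from a terminal, and the HTTP version change is easy to revert if it makes no difference; these are standard Git options:

# Apply both suggested fixes to the current repository
git config http.postBuffer 52428800
git config http.version HTTP/1.1

# Revert the HTTP version change later, if desired
git config --unset http.version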

Check your SSH configuration

If pushing over SSH, first check your SSH configuration as ‘Broken pipe’
errors can sometimes be caused by underlying issues with SSH (such as
authentication). Make sure that SSH is correctly configured by following the
instructions in the SSH troubleshooting documentation.

If you’re a GitLab administrator with server access, you can also prevent
session timeouts by configuring SSH keep-alive on the client or the server.
Configuring both the client and the server is unnecessary.

To configure SSH on the client side:

  • On UNIX, edit ~/.ssh/config (create the file if it doesn’t exist) and
    add or edit:

    Host your-gitlab-instance-url.com
      ServerAliveInterval 60
      ServerAliveCountMax 5
    
  • On Windows, if you are using PuTTY, go to your session properties, then
    navigate to “Connection” and under “Sending of null packets to keep
    session active”, set Seconds between keepalives (0 to turn off) to 60.

To configure SSH on the server side, edit /etc/ssh/sshd_config and add:

ClientAliveInterval 60
ClientAliveCountMax 5
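
Whichever side you configure, you can then verify that SSH authentication to the instance still works; a quick check (the hostname is the same placeholder used above):

# Test SSH authentication; a successful run prints a welcome message
ssh -T git@your-gitlab-instance-url.com

# Add -v for verbose output if the connection still drops
ssh -Tv git@your-gitlab-instance-url.com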

Running a git repack

If ‘pack-objects’ type errors are also being displayed, you can try to
run a git repack before attempting to push to the remote repository again:

git repack
git push
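
If a plain git repack is not enough, a more aggressive variant consolidates everything into a single pack and removes redundant ones; these are standard Git options, though the operation can take a while on large repositories:

# Repack all objects into one pack and delete redundant packs
git repack -a -d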

Upgrade your Git client

If you’re running an older version of Git (earlier than 2.9), consider upgrading
to 2.9 or later (see Broken pipe when pushing to Git repository).

ssh_exchange_identification error

Users may experience the following error when attempting to push or pull
using Git over SSH:

Please make sure you have the correct access rights
and the repository exists.
...
ssh_exchange_identification: read: Connection reset by peer
fatal: Could not read from remote repository.

or

ssh_exchange_identification: Connection closed by remote host
fatal: The remote end hung up unexpectedly

or

kex_exchange_identification: Connection closed by remote host
Connection closed by x.x.x.x port 22

This error usually indicates that the SSH daemon’s MaxStartups value is throttling
SSH connections. This setting specifies the maximum number of concurrent, unauthenticated
connections to the SSH daemon. It affects users with valid authentication
credentials (SSH keys) because every connection is ‘unauthenticated’ at
first. The default value is 10.

Increase MaxStartups on the GitLab server
by adding or modifying the value in /etc/ssh/sshd_config:

MaxStartups 100:30:200

100:30:200 means up to 100 SSH sessions are allowed without restriction,
after which 30% of connections are dropped until reaching an absolute maximum of 200.
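
To confirm the value that sshd actually derives from its configuration, you can use OpenSSH’s extended test mode:

# Print the effective MaxStartups setting
sudo sshd -T | grep -i maxstartups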

After you modify the value of MaxStartups, check for any errors in the configuration.

sudo sshd -t -f /etc/ssh/sshd_config

If the configuration check runs without errors, it should be safe to restart the
SSH daemon for the change to take effect.

# Debian/Ubuntu
sudo systemctl restart ssh

# CentOS/RHEL
sudo service sshd restart

Timeout during git push / git pull

If pulling from or pushing to your repository takes more than 50 seconds,
a timeout is issued. The timeout message contains a log of the operations performed
and their respective timings, like the example below:

remote: Running checks for branch: master
remote: Scanning for LFS objects... (153ms)
remote: Calculating new repository size... (cancelled after 729ms)

You can use this log to investigate which operation is performing poorly
and to provide GitLab with more information on how to improve the service.
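
If the server-side log is not enough, client-side tracing can help identify the slow phase; a quick example using standard Git environment variables (GIT_CURL_VERBOSE only applies to HTTP(S) remotes):

# Trace Git's transport activity while pushing
GIT_TRACE=1 GIT_TRACE_PACKET=1 GIT_CURL_VERBOSE=1 git push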

git clone over HTTP fails with transfer closed with outstanding read data remaining error

Sometimes, when cloning old or large repositories, the following error is thrown:

error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

This is a common problem with Git itself, due to its inability to handle large files or large quantities of files.
Git LFS was created to work around this problem; however, even it has limitations. The error is usually due to one of these reasons:

  • The number of files in the repository.
  • The number of revisions in the history.
  • The existence of large files in the repository.

The root causes vary, so multiple potential solutions exist, and you may need to
apply more than one:

  • If this error occurs when cloning a large repository, you can
    decrease the cloning depth
    to a value of 1. For example:

    variables:
      GIT_DEPTH: 1

  • You can increase the
    http.postBuffer
    value in your local Git configuration from the default 1 MB value to a value greater
    than the repository size. For example, if git clone fails when cloning a 500 MB
    repository, you should set http.postBuffer to 524288000:

    # Set the http.postBuffer size, in bytes
    git config http.postBuffer 524288000
    
  • You can increase the http.postBuffer on the server side:

    1. Modify the GitLab instance’s
      gitlab.rb file:

      gitaly['gitconfig'] = [
        # Set the http.postBuffer size, in bytes
        {key: "http.postBuffer", value: "524288000"},
      ]
      
    2. Apply the configuration change:

      sudo gitlab-ctl reconfigure
      

For example, if a repository has a very long history and no large files, changing
the depth should fix the problem. However, if a repository has very large files,
even a depth of 1 may be too large, thus requiring the postBuffer change.
If you increase your local postBuffer but the NGINX value on the backend is still
too small, the error persists.
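
Putting the client-side options together, one workflow that often avoids the error is to shallow-clone first and deepen the history only after the clone succeeds; a minimal example (the URL is a placeholder):

# Clone only the latest revision, then fetch the remaining history
git clone --depth 1 https://gitlab.example.com/group/project.git
cd project
git fetch --unshallow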

Modifying the server is not always an option, and introduces more potential risk.
Attempt local changes first.

Password expired error on Git fetch via SSH for LDAP user

If git fetch returns this HTTP 403 Forbidden error on a self-managed instance of
GitLab, the password expiration date (users.password_expires_at) for this user in the
GitLab database is a date in the past:

Your password expired. Please access GitLab from a web browser to update your password.

Requests made with an SSO account where password_expires_at is not null
return this error:

"403 Forbidden - Your password expired. Please access GitLab from a web browser to update your password."

To resolve this issue, you can update the password expiration by either:

  • Using the gitlab-rails console:

    gitlab-rails console
    user = User.find_by_username '<USERNAME>'
    user.update!(password_expires_at: nil)
    
  • Using gitlab-psql:

     # gitlab-psql
     UPDATE users SET password_expires_at = null WHERE username='<USERNAME>';
    

The bug was reported in this issue.

Error on Git fetch: “HTTP Basic: Access Denied”

If you receive an HTTP Basic: Access denied error when using Git over HTTP(S),
refer to the two-factor authentication troubleshooting guide.


Issue

I have been trying to git push some files into a repo I just created, but it keeps failing.

I’ve already tried changing http.version from HTTP/2 to HTTP/1.1 (I’ve tried both) and I also increased the http.postBuffer and http.maxRequestBuffer size. Most fixes I found online recommend changing one or both of these.

The largest file in my local working directory is 24.6 MB (excluding a .pack file) so I don’t have to use Git LFS.

Here is some of the output of git config --list:

diff.astextplain.textconv=astextplain
filter.lfs.clean=git-lfs clean -- %f
filter.lfs.smudge=git-lfs smudge -- %f
filter.lfs.process=git-lfs filter-process
filter.lfs.required=true
http.sslbackend=openssl
http.sslcainfo=C:/Program Files/Git/mingw64/ssl/certs/ca-bundle.crt
core.autocrlf=true
core.fscache=true
core.symlinks=false
pull.rebase=false
credential.helper=manager-core
credential.https://dev.azure.com.usehttppath=true
init.defaultbranch=master
core.editor="C:\Users\username\AppData\Local\Programs\Microsoft VS Code\Code.exe" --wait
core.longpaths=true
core.compression=0
gui.recentrepo=C:/Users/username/path/to/myRepo
filter.lfs.clean=git-lfs clean -- %f
filter.lfs.smudge=git-lfs smudge -- %f
filter.lfs.process=git-lfs filter-process
filter.lfs.required=true
...
http.postbuffer=30000000
http.version=HTTP/1.1
http.maxrequestbuffer=300000000
credential.helper=wincred
core.bare=false
core.repositoryformatversion=0
core.filemode=false
core.symlinks=false
core.ignorecase=true
core.logallrefupdates=true
remote.origin.url=https://github.com/username/myRepo.git
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
remote.origin.pushurl=https://github.com/username/myRepo.git
branch.main.remote=origin
branch.main.merge=refs/heads/main

And here is the output after git push:

Enumerating objects: 177, done.
Counting objects: 100% (177/177), done.
Delta compression using up to 8 threads
Compressing objects: 100% (168/168), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
send-pack: unexpected disconnect while reading sideband packet
Writing objects: 100% (177/177), 444.10 MiB | 612.00 KiB/s, done.
Total 177 (delta 5), reused 177 (delta 5), pack-reused 0
fatal: the remote end hung up unexpectedly
Everything up-to-date

I am using no antivirus/firewall other than Windows Defender.

Please help.

Solution

In my situation, I was trying to push too large a payload. According to this article, “Because github does not allow a single push larger than 2GB you can push it in batches. Split the push into part1, part2…”

My solution was to break up the payload into several commits and push a couple at a time. Depending on your situation, you could try writing a shell script to push incrementally up to HEAD if your repo has commit history (see the sketch after the attribution below). I didn’t end up doing that; instead, I made ample use of matching patterns with multiline git add commands to select the files I wanted for each push.

Answered By – michen00

This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
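
A rough sketch of that incremental-push idea, assuming the remote branch is named main (as in the config above) and that batches of roughly 100 commits stay under the push size limit:

# Push history in batches of ~100 commits, oldest first
for rev in $(git rev-list --reverse HEAD | awk 'NR % 100 == 0'); do
  git push origin "${rev}:refs/heads/main"
done

# Push anything left over after the last full batch
git push origin HEAD:refs/heads/main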

Sometimes the cURL error ‘cURL 18 transfer closed with outstanding read data remaining’ occurs while retrieving data from a URL using cURL.

Here at Ibmi Media, as part of our Server Management Services, we regularly help our customers fix cURL-related errors.

In this context, we shall look into the causes of this error and how to resolve it.

Nature of the cURL error ‘cURL 18 transfer closed with outstanding read data remaining’

Sometimes, the file we transfer is smaller or larger than expected. Such cases arise when the server initially reports an expected transfer size, and then delivers data that doesn’t match that size.

Basically, this error is related to the Content-Length header.

cURL error 18 can be described as below:

CURLE_PARTIAL_FILE

It means a partial file, i.e. only a part of the file was transferred.

Different causes and fixes for the cURL error ‘cURL 18 transfer closed with outstanding read data remaining’

Let’s look at the different causes of this error and how to fix each one.

1. An incorrect Content-Length header was sent by the peer.

If an incorrect Content-Length header has been sent, the best option is to let cURL determine the length by itself. This avoids the issues that arise from setting the wrong size.

Moreover, we can fix this by suppressing the ‘Expect: 100-continue’ header that cURL usually sends:

curl_setopt($curl, CURLOPT_HTTPHEADER, array('Expect:'));
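
The same fix works from the curl command line: passing a header with an empty value removes it from the request. A small example; the URL and payload file are placeholders:

# An empty value removes the Expect: 100-continue header from the request
curl -H 'Expect:' -d @payload.json https://example.com/upload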

2. The connection timed out because keep-alives were not sent to keep it going.

To fix this issue, add the --keepalive-time option.

For instance,

--keepalive-time 2

This option sets both the time a connection must remain idle before cURL sends keepalive probes and the interval between individual probes. If --no-keepalive is used, this option has no effect.

If this option is given several times, the last one is used. If you don’t specify a value, it defaults to 60 seconds.

In PHP cURL, the equivalent of the --keepalive-time option is available from PHP version 5.5. You can use it as follows:

curl_setopt($connection, CURLOPT_TCP_KEEPALIVE, 1);
curl_setopt($connection, CURLOPT_TCP_KEEPIDLE, 2);
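
The command-line equivalent looks like this; the URL and output file name are placeholders:

# Send TCP keepalive probes after 2 seconds of idle time
curl --keepalive-time 2 -o output.bin https://example.com/largefile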


