Contents
- iperf3: error - unable to send control message: Bad file descriptor #1233
- Comments
- "Bad file descriptor" 5 seconds after specified connection duration #753
- Comments
- Bug Report
- Server listening on 5201
- Regression in 3.1/3.2: iperf server fails after ~15 seconds with "Bad file descriptor" #645
- Comments
- Bug Report
- Iperf3 error unable to send control message bad file descriptor
iperf3: error - unable to send control message: Bad file descriptor #1233
ISSUE: iperf3: error - unable to send control message: Bad file descriptor
I'm running network tests between two of my systems with iperf3. Sometimes the test succeeds and sometimes it fails with the error message "iperf3: error - unable to send control message: Bad file descriptor". Suspecting a connectivity problem, I ran a ping test between the two systems, which passed. I also retry the iperf3 test when the first attempt fails, but every attempt gives the same result (frequent failures with this error message).
Iperf Version: iperf 3.9 (cJSON 1.7.13)
Command used: iperf3 --client 10.0.0.4 --json
Please help me identify and fix the issue. Thanks in advance.
Hi, we might need a bit more information to figure out your issue.
- What operating system are you using?
- Are you immediately retrying the test or is there some kind of sleep?
- Can you try with a more recent version of iperf3; the latest is 3.10.1.
Thank you for your response. Please find my inline answers below.
- What operating system are you using?
ANS: It's Ubuntu 18.04.
- Are you immediately retrying the test or is there some kind of sleep?
ANS: As of now I'm adding some sleep time, but it still depends on when the server starts up (which we don't check at the moment).
- Can you try with a more recent version of iperf3; the latest is 3.10.1.
ANS: Let me try and give you an update.
Regarding your second question: there is a chance the client tries to reach the server as soon as the server starts up. Let me add a check for when the server has started and add a sleep after that. But I would like to know how long the client should wait after the server starts before it can reach it.
Thanks,
Bharath K
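One way to avoid guessing a fixed sleep time is to poll the server's control port and only start the client once it answers. A minimal bash sketch, assuming the default port 5201, that nc (netcat) is installed, and the client address from the command above:
# keep probing TCP port 5201 until the iperf3 server accepts connections
until nc -z 10.0.0.4 5201; do
    sleep 1
done
# the server is reachable now, so run the actual test
iperf3 --client 10.0.0.4 --json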
That message shows up when the port is not available. Check your firewall settings or the port that you are trying to test.
I'm not specifying any port with the command, and I don't have any firewall blocking traffic, as this is my own system configured as a private network.
If you’re not mentioning any port with the command, by default it is going to need to use port 5001. Assuming that you are using this on your private network, you need to enable port forwarding on your home router.
A ping test only tests that the host is alive and uses the ICMP protocol; what you need to test is whether the port is blocked.
does it support UPNP?
The default iperf server port is 5201 which must be open on the server’s firewall.
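Two quick checks along those lines, as a hedged sketch assuming the default port 5201, an Ubuntu server with ufw as its firewall, and the client address from the original report:
# from the client: is anything answering on the control port?
nc -zv 10.0.0.4 5201
# on the server: open the port if ufw is the active firewall
sudo ufw allow 5201/tcp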
I am facing the same issue.
What operating system are you using?
Are you immediately retrying the test, or is there some kind of sleep?
Can you try with a more recent version of iperf3; the latest is 3.10.1.
Let me try to get the latest.
The default iperf server port is 5201 which must be open on the server’s firewall.
The server is working fine when tested from another machine.
Server computer is Ubuntu 20.04.4 LTS, x86-64
Client computer is Windows 10 Enterprise 2016 LTSB, v1607
iperf3 fails after exactly 49 seconds, every time, with
iperf3: error - select failed: Bad file descriptor
Running iperf3 3.11 on the Ubuntu machine, built from the tag here on github.
Running the other way around, with the server on Windows and the client on Ubuntu, no error occurs and the test can run indefinitely.
Does Windows 10 do any outbound traffic blocking? I’d assume you needed to add a firewall exception to allow the inbound traffic. Not sure that this is the answer, but does the same issue occur with the Windows firewall disabled (or defender or whatever)?
No, running the same test w10e2016 -> w10e2016 doesn’t have the same issue.
I believe it is a behavior regression introduced after 3.1.3; all the Windows computers are running 3.1.3 because I haven't gotten around to figuring out how to build iperf3 for Windows from source.
To clarify, the test runs for 49 seconds. It works perfectly for 49 seconds, and then terminates unexpectedly.
The command used on the ubuntu machine (3.11) is
iperf3 -s
The command used on the windows machine (3.1.3) is
iperf3 -c testmachine1 -b 64m -t 0
However, if the following commands are used instead,
iperf3 -s
iperf3 -c testmachine1 -b 64m -t 80
Then the test runs for exactly 80 seconds as expected.
Again, running the other way around,
ubuntu: iperf3 -c testmachine2 -b 64m -t 0
windows: iperf3 -s
The test runs indefinitely as expected.
I haven’t tried ubuntu-ubuntu yet, working on it now.
Critical update!
Connecting 3.11 to 3.11 works just fine.
It is a non-trivial versioning issue;
- The only readily available version of iperf3 for Windows is 3.1.3
- The most readily available version of iperf3 for Ubuntu is 3.7 (though 3.11 also exhibits this behavior)
If 3.7 or 3.11 is -s and 3.1.3 is -c, attempting to run an indefinite test (-t 0) will fail after 49 seconds.
If 3.7 or 3.11 is -c and 3.1.3 is -s, attempting to run an indefinite test (-t 0) will succeed.
If 3.7 or 3.11 is -s and 3.1.3 is -c, attempting to run a limited test (-t 80) will not fail after 49 seconds, but will succeed.
If 3.7 is -s and 3.11 is -c, attempting to run an indefinite test (-t 0) will succeed.
So the trouble is running post-3.1.3 as the server and 3.1.3 as the client. Different versions can connect to each other, which is probably desirable behavior, but the indefinite-test behavior differs somehow, and we really badly need updated public binaries.
This may be irrelevant to the OP's case, but this message is sometimes shown instead of "connection refused", for example when the destination port is blocked by a firewall or the iperf3 server is not even running. I saw it when trying to connect from iperf 3.7 (latest Ubuntu LTS) to iperf 3.7 (previous Ubuntu LTS).
… this message is sometimes shown instead of "connection refused", for example when the destination port is blocked by a firewall or the iperf3 server is not even running. I saw it when trying to connect from iperf 3.7 …
This specific issue was probably fixed by PR #1132. The fix is available only starting from version 3.10.
The only readily available version of iperf3 for Windows is 3.1.3
A newer iperf3 version for Windows is available here (although not from the official iperf3 site; it is maintained by BudMan).
For me, a different error message would already have helped. Maybe something along the lines of what davidhan1120 commented on Dec 3, 2021, e.g.
the port is not available. Check your firewall settings or the port that you are trying to test.
If you really want to be beginner-friendly, maybe also write that the server might not be running on the given host.
In my man page, "control message" does not occur even once.
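Until such a message exists, the "server not running" case can at least be ruled out by hand on the server itself; a minimal sketch, assuming a Linux server and the default port 5201:
# is any process listening on the iperf3 control port?
ss -ltn | grep ':5201'
# is an iperf3 process running at all?
pgrep -a iperf3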
Source
"Bad file descriptor" 5 seconds after specified connection duration #753
Version of iperf3:
iperf 3.5 (cJSON 1.5.2)
Hardware:
VirtualBox VM
Operating system (and distribution, if any):
Ubuntu 18.04
Bug Report
The problem occurs when a connection lasts more than 5 seconds beyond the specified duration (when data keeps arriving at the receiver even though the sender has stopped emitting). This can be the result of large queues in network equipment combined with low bandwidth.
The bug is related to issues #645, #648 and #653.
Expected Behavior
There could be two expected behaviors:
- either the connection is stopped by the receiver when the specified duration is exceeded
- or the connection continues while data is coming (there could be a timeout to cut the connection if no data has been received in the last 5s)
Actual Behavior
As soon as the connection length on the client reaches the duration specified by the sender plus 5 seconds, the message "iperf3: error - select failed: Bad file descriptor" is displayed.
In the example below, the link capacity is 5 Mb/s and the client is sending UDP data at 10 Mb/s
Connecting to host 192.168.2.3, port 5201
[ 6] local 192.168.0.4 port 41567 connected to 192.168.2.3 port 5201
[ ID] Interval Transfer Bitrate Total Datagrams
[ 6] 0.00-1.00 sec 1.19 MBytes 9.99 Mbits/sec 863
[ 6] 1.00-2.00 sec 1.19 MBytes 10.0 Mbits/sec 864
[ 6] 2.00-3.00 sec 1.19 MBytes 10.0 Mbits/sec 863
[ 6] 3.00-4.00 sec 1.19 MBytes 10.0 Mbits/sec 863
[ 6] 4.00-5.00 sec 1.19 MBytes 10.0 Mbits/sec 863
[ 6] 5.00-6.00 sec 1.19 MBytes 10.0 Mbits/sec 864
[ 6] 6.00-7.00 sec 1.19 MBytes 10.0 Mbits/sec 863
[ 6] 7.00-8.00 sec 1.19 MBytes 10.0 Mbits/sec 863
[ 6] 8.00-9.00 sec 1.19 MBytes 10.0 Mbits/sec 863
iperf3: error - control socket has closed unexpectedly
Server listening on 5201
Accepted connection from 192.168.0.4, port 42186
[ 6] local 192.168.2.3 port 5201 connected to 192.168.0.4 port 41567
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 6] 0.00-1.00 sec 252 KBytes 2.06 Mbits/sec 2.033 ms 0/178 (0%)
[ 6] 1.00-2.00 sec 625 KBytes 5.12 Mbits/sec 1.898 ms 0/442 (0%)
[ 6] 2.00-3.00 sec 617 KBytes 5.05 Mbits/sec 1.833 ms 0/436 (0%)
[ 6] 3.00-4.00 sec 617 KBytes 5.05 Mbits/sec 1.818 ms 0/436 (0%)
[ 6] 4.00-5.00 sec 615 KBytes 5.04 Mbits/sec 2.090 ms 0/435 (0%)
[ 6] 5.00-6.00 sec 617 KBytes 5.05 Mbits/sec 1.703 ms 0/436 (0%)
[ 6] 6.00-7.00 sec 622 KBytes 5.10 Mbits/sec 2.342 ms 0/440 (0%)
[ 6] 7.00-8.00 sec 624 KBytes 5.11 Mbits/sec 2.083 ms 0/441 (0%)
[ 6] 8.00-9.00 sec 530 KBytes 4.34 Mbits/sec 4.119 ms 0/375 (0%)
[ 6] 9.00-10.00 sec 372 KBytes 3.05 Mbits/sec 1.938 ms 0/263 (0%)
[ 6] 10.00-11.00 sec 615 KBytes 5.04 Mbits/sec 2.128 ms 0/435 (0%)
[ 6] 11.00-12.00 sec 628 KBytes 5.14 Mbits/sec 2.037 ms 0/444 (0%)
[ 6] 12.00-13.00 sec 617 KBytes 5.05 Mbits/sec 2.243 ms 0/436 (0%)
[ 6] 13.00-14.00 sec 619 KBytes 5.08 Mbits/sec 2.095 ms 0/438 (0%)
iperf3: error - select failed: Bad file descriptor
Steps to Reproduce
The bug is a bit tricky to reproduce: you need a link that buffers packets in a queue (I believe many routers do that), as well as a link with low capacity (so that the packets stored in the queue cannot all be transferred within 5 seconds on the link).
Then just send UDP data at a bitrate higher than the link capacity. This way the last packet sent by the sender arrives at the receiver more than 5 seconds later, which triggers the bug.
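One way to reproduce this without special hardware is to emulate the bottleneck with a token-bucket qdisc on the sender. A hedged sketch, assuming a Linux sender whose egress interface is eth0 and the server address from the report above; a 10 MB queue draining at 5 Mbit/s takes roughly 16 seconds, well past the 5-second window:
# shape the sender's egress to 5 Mbit/s with a deep (10 MB) queue
sudo tc qdisc add dev eth0 root tbf rate 5mbit burst 32kbit limit 10mb
# send UDP at twice the bottleneck rate for 10 seconds
iperf3 -c 192.168.2.3 -u -b 10M -t 10
# remove the shaper afterwards
sudo tc qdisc del dev eth0 root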
Source
Regression in 3.1/3.2: iperf server fails after ~15 seconds with "Bad file descriptor" #645
Version of iperf3: 3.2, 3.2rc1 and latest commit on master (cd5d89d).
Hardware: MBP (Arch Linux, 2015), MBP (MacOS, 2017), VPS (Arch Linux).
Operating system (and distribution, if any): Arch Linux, MacOS.
Bug Report
Expected behaviour: setting -t 0, I expect iperf to run indefinitely.
Actual behaviour: the connection is dropped after about 15 seconds.
Steps to reproduce: run iperf 3.2 as a server with iperf3 -s and a client (any version) with iperf3 -c $HOST -t 0 (can be entirely local, with HOST=localhost); after ~15 seconds, the client will report
and the server will report
(this can also be reproduced in reverse mode).
This is a weird problem on a couple of levels. I see it on the tip of the master and 3.1-STABLE branches, but not the tip of 3.0-STABLE (tested variously on macOS Sierra and CentOS 7). The test shouldn't stop like that, so there's definitely a bug there.
I'm not sure what the correct behavior is, though. Your "run indefinitely" makes some sense, but on the other hand it'd also make sense to not allow this case at all, so that a client can't conduct a DoS attack against a server (a counter-argument is that a server should be able to specify a maximum test length).
I’m going to dig a little more into this and try to understand what’s going on. So far I have the following interesting result:
Server (iperf-master), client (iperf-master): Failure
Server (iperf-3.0-stable), client (iperf-master): OK
Server (iperf-master), client (iperf-3.0-stable): Failure
Server (iperf-3.0-stable), client (iperf-3.0-stable): OK
This so far implies that there’s a problem on the server side with newer iperf.
BTW thanks for a well-formulated bug report!
Source
-
#1
I have configured an LACP bond between 2x 10Gig ports and connected it to a TrueNAS VM. Now I want to verify the combined 20Gb link between Proxmox and TrueNAS using iperf. Can anybody help me with how to do that?
iperf is running on TrueNAS; I want to install/configure it on the Proxmox server so the link can be verified.
Last edited: Nov 15, 2021
aaron
Proxmox Staff Member
-
#2
Start iperf in server mode on one machine and in client mode on the other. For example (iperf3 as I have that installed):
On the server machine
iperf3 -s -B 192.168.0.5 -p 5002
On the client:
iperf3 -c 192.168.0.5 -p 5002
For more details on the parameters check the man pages, e.g. man iperf
or man iperf3
-
#3
Start iperf in server mode on one machine and in client mode on the other. For example (iperf3 as I have that installed):
On the server machine
iperf3 -s -B 192.168.0.5 -p 5002
On the client:
iperf3 -c 192.168.0.5 -p 5002
For more details on the parameters check the man pages, e.g.
man iperf
or man iperf3
iperf3 -s -B 192.168.0.200 -p 5002
Yes, I am running the same command in the Proxmox server shell, but I get the following reply:
-bash: iperf3: command not found
aaron
Proxmox Staff Member
-
#4
Well, then you need to install it
apt install iperf3
should do the trick. Proxmox VE is based on Debian.
-
#5
Well, then you need to install it
apt install iperf3
should do the trick. Proxmox VE is based on Debian.
I'm getting this reply after installing it and running the previous command:
iperf3: error - unable to start listener for connections: Cannot assign requested address
iperf3: exiting
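"Cannot assign requested address" here most likely comes from the -B option in the example above: -B must be given an address that is actually configured on the machine running the command. A minimal sketch for this setup, assuming the Proxmox host's own address is 192.168.0.50 (or drop -B entirely to listen on all addresses):
# on the Proxmox host (server side)
iperf3 -s -B 192.168.0.50
# or simply: iperf3 -s
# on TrueNAS (client side)
iperf3 -c 192.168.0.50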
-
#6
iperf3: error - unable to send control message: Bad file descriptor
-
#7
TrueNAS has the IP address 192.168.0.10, while the Proxmox server is at 192.168.0.50:8006.
Will you please now tell me the complete commands to run on TrueNAS and the Proxmox server to check the bandwidth of the connection between them using iperf.
-
#8
I received the following output after running these commands:
iperf3 -s on the server
iperf3 -c 192.168.0.50 on TrueNAS.
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 178 KBytes 1.46 Mbits/sec
[ 5] 1.00-2.00 sec 253 KBytes 2.07 Mbits/sec
[ 5] 2.00-3.00 sec 291 KBytes 2.39 Mbits/sec
[ 5] 3.00-4.00 sec 345 KBytes 2.83 Mbits/sec
[ 5] 4.00-5.00 sec 113 KBytes 926 Kbits/sec
[ 5] 5.00-6.00 sec 242 KBytes 1.98 Mbits/sec
[ 5] 6.00-7.00 sec 298 KBytes 2.44 Mbits/sec
[ 5] 7.00-8.00 sec 414 KBytes 3.39 Mbits/sec
[ 5] 8.00-9.00 sec 396 KBytes 3.24 Mbits/sec
[ 5] 9.00-10.00 sec 421 KBytes 3.45 Mbits/sec
[ 5] 10.00-10.24 sec 0.00 Bytes 0.00 bits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.24 sec 2.88 MBytes 2.36 Mbits/sec receiver
aaron
Proxmox Staff Member
-
#9
Doesn't look too good. How is your network set up? Do the PVE and TrueNAS servers each have multiple interfaces? What does the network infrastructure in between look like?
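One more thing worth checking for the original 2x10GbE LACP question: a single TCP stream never uses more than one member link of a bond, and depending on the bond's hash policy even several streams between the same pair of addresses may land on the same link. A hedged sketch for a longer, multi-stream run, assuming the addresses used above:
# on the Proxmox host
iperf3 -s
# on TrueNAS: run for 30 seconds with 4 parallel streams
iperf3 -c 192.168.0.50 -t 30 -P 4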
iperf3: error - unable to send control message: Bad file descriptor
ISSUE: iperf3: error - unable to send control message: Bad file descriptor
Hello Team,
I'm running network tests between two of my systems with iperf3. Sometimes the test succeeds and sometimes it fails with the error message "iperf3: error - unable to send control message: Bad file descriptor". Suspecting a connectivity problem, I ran a ping test between the two systems, which passed. I also retry the iperf3 test when the first attempt fails, but every attempt gives the same result (frequent failures with this error message).
Iperf Version: iperf 3.9 (cJSON 1.7.13)
Command used: iperf3 --client 10.0.0.4 --json
Please help me identify and fix the issue. Thanks in advance.
Hi, we might need a bit more information to figure out your issue.
- What operating system are you using?
- Are you immediately retrying the test or is there some kind of sleep?
- Can you try with a more recent version of iperf3; the latest is 3.10.1.
Hi @swlars,
Thank you for your response. Please find my inline answers below.
- What operating system are you using?
ANS: It's Ubuntu 18.04.
- Are you immediately retrying the test or is there some kind of sleep?
ANS: As of now I'm adding some sleep time, but it still depends on when the server starts up (which we don't check at the moment).
- Can you try with a more recent version of iperf3; the latest is 3.10.1.
ANS: Let me try and give you an update.
Regarding your second question: there is a chance the client tries to reach the server as soon as the server starts up. Let me add a check for when the server has started and add a sleep after that. But I would like to know how long the client should wait after the server starts before it can reach it.
Thanks,
Bharath K
That message shows up when the port is not available. Check your firewall settings or the port that you are trying to test.
I'm not specifying any port with the command, and I don't have any firewall blocking traffic, as this is my own system configured as a private network.
If you’re not mentioning any port with the command, by default it is going to need to use port 5001. Assuming that you are using this on your private network, you need to enable port forwarding on your home router.
A ping test only tests that the host is alive and uses the ICMP protocol; what you need to test is whether the port is blocked.
The default iperf server port is 5201 which must be open on the server’s firewall.
I am facing the same issue.
What operating system are you using?
PRETTY_NAME="Raspbian GNU/Linux 11 (bullseye)"
NAME="Raspbian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
Are you immediately retrying the test, or is there some kind of sleep?
Nope.
Can you try with a more recent version of iperf3; the latest is 3.10.1.
$ iperf3 --version
iperf 3.9 (cJSON 1.7.13)
Let me try to get the latest.
The default iperf server port is 5201 which must be open on the server’s firewall.
The server is working fine when tested from another machine.
me too:
$ iperf3 -c 10.253.38.37
iperf3: error - unable to send control message: Bad file descriptor
Server computer is Ubuntu 20.04.4 LTS, x86-64
Client computer is Windows 10 Enterprise 2016 LTSB, v1607
iperf3 fails after exactly 49 seconds, every time, with
iperf3: error - select failed: Bad file descriptor
Running iperf3 3.11 on the Ubuntu machine, built from the tag here on github.
Running the other way around, with the server on Windows and the client on Ubuntu, no error occurs and the test can run indefinitely.
Does Windows 10 do any outbound traffic blocking? I’d assume you needed to add a firewall exception to allow the inbound traffic. Not sure that this is the answer, but does the same issue occur with the Windows firewall disabled (or defender or whatever)?
No, running the same test w10e2016 -> w10e2016 doesn’t have the same issue.
I believe it is a behavior regression introduced after 3.1.3; all the Windows computers are running 3.1.3 because I haven't gotten around to figuring out how to build iperf3 for Windows from source.
To clarify, the test runs for 49 seconds. It works perfectly for 49 seconds, and then terminates unexpectedly.
The command used on the ubuntu machine (3.11) is
iperf3 -s
The command used on the windows machine (3.1.3) is
iperf3 -c testmachine1 -b 64m -t 0
However, if the following commands are used instead,
iperf3 -s
iperf3 -c testmachine1 -b 64m -t 80
Then the test runs for exactly 80 seconds as expected.
Again, running the other way around,
ubuntu: iperf3 -c testmachine2 -b 64m -t 0
windows: iperf3 -s
The test runs indefinitely as expected.
I haven’t tried ubuntu-ubuntu yet, working on it now.
Critical update!
Connecting 3.11 to 3.11 works just fine.
It is a non-trivial versioning issue;
- The only readily available version of iperf3 for Windows is 3.1.3
- The most readily available version of iperf3 for Ubuntu is 3.7 (though 3.11 also exhibits this behavior)
If 3.7 or 3.11 is -s and 3.1.3 is -c, attempting to run an indefinite test (-t 0) will fail after 49 seconds.
If 3.7 or 3.11 is -c and 3.1.3 is -s, attempting to run an indefinite test (-t 0) will succeed.
If 3.7 or 3.11 is -s and 3.1.3 is -c, attempting to run a limited test (-t 80) will not fail after 49 seconds, but will succeed.
If 3.7 is -s and 3.11 is -c, attempting to run an indefinite test (-t 0) will succeed.
So the trouble is running post-3.1.3 as the server and 3.1.3 as the client. Different versions can connect to each other, which is probably desirable behavior, but the indefinite-test behavior differs somehow, and we really badly need updated public binaries.
This may be irrelevant to the OP's case, but this message is sometimes shown instead of "connection refused", for example when the destination port is blocked by a firewall or the iperf3 server is not even running. I saw it when trying to connect from iperf 3.7 (latest Ubuntu LTS) to iperf 3.7 (previous Ubuntu LTS).
… this message is sometimes shown instead of "connection refused", for example when the destination port is blocked by a firewall or the iperf3 server is not even running. I saw it when trying to connect from iperf 3.7 …
This specific issue was probably fixed by PR #1132. The fix is available only starting from version 3.10.
@embermctillhawk, regarding:
The only readily available version of iperf3 for Windows is 3.1.3
A newer iperf3 version for Windows is available here (although not from the official iperf3 site; it is maintained by BudMan).
For me, a different error message would already have helped. Maybe something along the lines of what davidhan1120 commented on Dec 3, 2021, e.g.
the port is not available. Check your firewall settings or the port that you are trying to test.
If you really want to be beginner-friendly, maybe also write that the server might not be running on the given host.
In my man page, "control message" does not occur even once.
root@truenas[/mnt/MAINpool/PhotoLibrary]# fio --name=seq-read --rw=read --ioengine=posixaio --blocksize=128k --iodepth=64 --runtime=300 --size=4g ad --rw=read --ioengine=posixaio --direct=1 --invalidate=1 --time_based --numjobs="$(nproc)"
seq-read: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=posixaio, iodepth=64
…
fio-3.25
Starting 12 processes
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
Jobs: 12 (f=12): [R(12)][100.0%][r=232MiB/s][r=1856 IOPS][eta 00m:00s]
seq-read: (groupid=0, jobs=1): err= 0: pid=1521691: Sun Jan 8 18:13:41 2023
read: IOPS=146, BW=18.3MiB/s (19.2MB/s)(5496MiB/300169msec)
slat (nsec): min=72, max=128869, avg=193.06, stdev=689.02
clat (usec): min=1700, max=3043.3k, avg=436911.43, stdev=541126.47
lat (usec): min=1700, max=3043.3k, avg=436911.62, stdev=541126.48
clat percentiles (usec):
| 1.00th=[ 1745], 5.00th=[ 1827], 10.00th=[ 1909],
| 20.00th=[ 10028], 30.00th=[ 63701], 40.00th=[ 131597],
| 50.00th=[ 227541], 60.00th=[ 367002], 70.00th=[ 534774],
| 80.00th=[ 759170], 90.00th=[1199571], 95.00th=[1635779],
| 99.00th=[2298479], 99.50th=[2466251], 99.90th=[3036677],
| 99.95th=[3036677], 99.99th=[3036677]
bw ( KiB/s): min=16384, max=147456, per=13.03%, avg=33253.12, stdev=24857.12, samples=338
iops : min= 128, max= 1152, avg=259.79, stdev=194.20, samples=338
lat (msec) : 2=13.80%, 4=5.70%, 10=0.46%, 20=2.60%, 50=5.68%
lat (msec) : 100=6.70%, 250=16.89%, 500=16.59%, 750=11.50%, 1000=5.97%
lat (msec) : 2000=11.64%, >=2000=2.47%
cpu : usr=0.01%, sys=0.00%, ctx=2722, majf=6, minf=52
IO depths : 1=0.1%, 2=0.1%, 4=1.1%, 8=12.5%, 16=25.0%, 32=52.9%, >=64=8.5%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=43968,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521692: Sun Jan 8 18:13:41 2023
read: IOPS=142, BW=17.8MiB/s (18.6MB/s)(5336MiB/300359msec)
slat (nsec): min=71, max=104374, avg=186.50, stdev=574.50
clat (usec): min=1364, max=3257.8k, avg=450296.80, stdev=512479.37
lat (usec): min=1364, max=3257.8k, avg=450296.98, stdev=512479.38
clat percentiles (usec):
| 1.00th=[ 1745], 5.00th=[ 1844], 10.00th=[ 1926],
| 20.00th=[ 48497], 30.00th=[ 109577], 40.00th=[ 177210],
| 50.00th=[ 250610], 60.00th=[ 383779], 70.00th=[ 566232],
| 80.00th=[ 817890], 90.00th=[1149240], 95.00th=[1501561],
| 99.00th=[2197816], 99.50th=[2734687], 99.90th=[3271558],
| 99.95th=[3271558], 99.99th=[3271558]
bw ( KiB/s): min=16384, max=212992, per=11.95%, avg=30480.09, stdev=22827.64, samples=358
iops : min= 128, max= 1664, avg=238.12, stdev=178.34, samples=358
lat (msec) : 2=11.25%, 4=2.39%, 10=1.50%, 20=2.10%, 50=2.85%
lat (msec) : 100=7.95%, 250=21.44%, 500=18.14%, 750=9.90%, 1000=8.85%
lat (msec) : 2000=12.14%, >=2000=1.50%
cpu : usr=0.01%, sys=0.00%, ctx=2559, majf=2, minf=50
IO depths : 1=0.1%, 2=0.1%, 4=1.2%, 8=12.5%, 16=25.0%, 32=53.0%, >=64=8.3%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=42688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521693: Sun Jan 8 18:13:41 2023
read: IOPS=108, BW=13.6MiB/s (14.3MB/s)(4088MiB/300173msec)
slat (nsec): min=73, max=107438, avg=181.44, stdev=720.59
clat (usec): min=1322, max=3953.3k, avg=587406.22, stdev=672173.48
lat (usec): min=1322, max=3953.3k, avg=587406.40, stdev=672173.48
clat percentiles (usec):
| 1.00th=[ 1680], 5.00th=[ 1827], 10.00th=[ 1942],
| 20.00th=[ 44303], 30.00th=[ 124257], 40.00th=[ 244319],
| 50.00th=[ 367002], 60.00th=[ 530580], 70.00th=[ 742392],
| 80.00th=[ 994051], 90.00th=[1501561], 95.00th=[2055209],
| 99.00th=[2902459], 99.50th=[3405775], 99.90th=[3942646],
| 99.95th=[3942646], 99.99th=[3942646]
bw ( KiB/s): min=16384, max=163840, per=10.56%, avg=26955.05, stdev=21192.15, samples=310
iops : min= 128, max= 1280, avg=210.58, stdev=165.56, samples=310
lat (msec) : 2=11.12%, 4=3.56%, 10=0.98%, 20=1.17%, 50=4.50%
lat (msec) : 100=6.07%, 250=12.72%, 500=19.18%, 750=10.96%, 1000=9.98%
lat (msec) : 2000=14.68%, >=2000=5.09%
cpu : usr=0.01%, sys=0.00%, ctx=2188, majf=1, minf=54
IO depths : 1=0.1%, 2=0.1%, 4=0.9%, 8=12.5%, 16=25.0%, 32=52.7%, >=64=8.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.6%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=32704,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521694: Sun Jan 8 18:13:41 2023
read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(6698MiB/300405msec)
slat (nsec): min=72, max=517443, avg=216.98, stdev=2328.93
clat (usec): min=442, max=2181.9k, avg=358773.06, stdev=384287.00
lat (usec): min=442, max=2181.9k, avg=358773.28, stdev=384287.00
clat percentiles (usec):
| 1.00th=[ 1745], 5.00th=[ 1827], 10.00th=[ 1909],
| 20.00th=[ 16188], 30.00th=[ 90702], 40.00th=[ 168821],
| 50.00th=[ 254804], 60.00th=[ 346031], 70.00th=[ 467665],
| 80.00th=[ 599786], 90.00th=[ 893387], 95.00th=[1132463],
| 99.00th=[1719665], 99.50th=[1887437], 99.90th=[2197816],
| 99.95th=[2197816], 99.99th=[2197816]
bw ( KiB/s): min= 1792, max=163840, per=11.57%, avg=29529.71, stdev=26422.34, samples=464
iops : min= 14, max= 1280, avg=230.70, stdev=206.42, samples=464
lat (usec) : 500=0.01%
lat (msec) : 2=13.15%, 4=5.15%, 10=0.81%, 20=1.42%, 50=4.24%
lat (msec) : 100=6.74%, 250=18.02%, 500=22.93%, 750=13.85%, 1000=5.85%
lat (msec) : 2000=7.35%, >=2000=0.48%
cpu : usr=0.02%, sys=0.00%, ctx=3264, majf=2, minf=51
IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=6.7%, 16=22.6%, 32=61.4%, >=64=8.8%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=97.2%, 8=1.3%, 16=0.1%, 32=0.1%, 64=1.4%, >=64=0.0%
issued rwts: total=53584,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521695: Sun Jan 8 18:13:41 2023
read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(7776MiB/300147msec)
slat (nsec): min=72, max=106747, avg=195.29, stdev=509.55
clat (usec): min=1700, max=1933.4k, avg=308777.12, stdev=314491.13
lat (usec): min=1700, max=1933.4k, avg=308777.31, stdev=314491.13
clat percentiles (usec):
| 1.00th=[ 1762], 5.00th=[ 1926], 10.00th=[ 5211],
| 20.00th=[ 56361], 30.00th=[ 102237], 40.00th=[ 158335],
| 50.00th=[ 212861], 60.00th=[ 274727], 70.00th=[ 367002],
| 80.00th=[ 522191], 90.00th=[ 750781], 95.00th=[ 977273],
| 99.00th=[1367344], 99.50th=[1568670], 99.90th=[1937769],
| 99.95th=[1937769], 99.99th=[1937769]
bw ( KiB/s): min=16384, max=212992, per=13.10%, avg=33423.36, stdev=24732.34, samples=476
iops : min= 128, max= 1664, avg=261.11, stdev=193.22, samples=476
lat (msec) : 2=6.83%, 4=3.15%, 10=1.44%, 20=1.34%, 50=5.97%
lat (msec) : 100=10.60%, 250=27.06%, 500=22.84%, 750=10.91%, 1000=5.35%
lat (msec) : 2000=4.53%
cpu : usr=0.02%, sys=0.00%, ctx=3676, majf=1, minf=51
IO depths : 1=0.1%, 2=0.1%, 4=1.2%, 8=12.5%, 16=25.0%, 32=53.1%, >=64=8.2%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=62208,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521696: Sun Jan 8 18:13:41 2023
read: IOPS=185, BW=23.2MiB/s (24.3MB/s)(6960MiB/300336msec)
slat (nsec): min=75, max=190210, avg=218.30, stdev=881.38
clat (usec): min=689, max=2831.4k, avg=345160.08, stdev=427359.08
lat (usec): min=690, max=2831.4k, avg=345160.30, stdev=427359.09
clat percentiles (usec):
| 1.00th=[ 1762], 5.00th=[ 1844], 10.00th=[ 1926],
| 20.00th=[ 27919], 30.00th=[ 77071], 40.00th=[ 127402],
| 50.00th=[ 187696], 60.00th=[ 274727], 70.00th=[ 387974],
| 80.00th=[ 574620], 90.00th=[ 910164], 95.00th=[1216349],
| 99.00th=[1988101], 99.50th=[2298479], 99.90th=[2701132],
| 99.95th=[2801796], 99.99th=[2835350]
bw ( KiB/s): min= 5632, max=163840, per=12.46%, avg=31781.08, stdev=26717.60, samples=448
iops : min= 44, max= 1280, avg=248.29, stdev=208.73, samples=448
lat (usec) : 750=0.01%
lat (msec) : 2=12.15%, 4=3.10%, 10=1.82%, 20=1.57%, 50=5.87%
lat (msec) : 100=10.67%, 250=22.57%, 500=18.05%, 750=9.60%, 1000=6.30%
lat (msec) : 2000=7.34%, >=2000=0.94%
cpu : usr=0.02%, sys=0.00%, ctx=3211, majf=1, minf=51
IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=4.7%, 16=14.8%, 32=71.5%, >=64=8.6%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=97.6%, 8=0.1%, 16=0.0%, 32=0.9%, 64=1.4%, >=64=0.0%
issued rwts: total=55680,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521697: Sun Jan 8 18:13:41 2023
read: IOPS=105, BW=13.2MiB/s (13.9MB/s)(3968MiB/300070msec)
slat (nsec): min=71, max=205667, avg=192.86, stdev=1179.69
clat (usec): min=1716, max=3657.3k, avg=604962.11, stdev=715788.06
lat (usec): min=1716, max=3657.3k, avg=604962.30, stdev=715788.07
clat percentiles (usec):
| 1.00th=[ 1795], 5.00th=[ 1876], 10.00th=[ 1958],
| 20.00th=[ 41681], 30.00th=[ 124257], 40.00th=[ 212861],
| 50.00th=[ 346031], 60.00th=[ 471860], 70.00th=[ 734004],
| 80.00th=[1069548], 90.00th=[1585447], 95.00th=[2298479],
| 99.00th=[3238003], 99.50th=[3338666], 99.90th=[3640656],
| 99.95th=[3640656], 99.99th=[3640656]
bw ( KiB/s): min=16384, max=196608, per=11.04%, avg=28160.22, stdev=23851.02, samples=288
iops : min= 128, max= 1536, avg=220.00, stdev=186.34, samples=288
lat (msec) : 2=11.03%, 4=3.49%, 10=0.20%, 20=0.60%, 50=5.65%
lat (msec) : 100=6.05%, 250=15.52%, 500=18.15%, 750=10.69%, 1000=7.46%
lat (msec) : 2000=14.72%, >=2000=6.45%
cpu : usr=0.01%, sys=0.00%, ctx=1984, majf=8, minf=46
IO depths : 1=0.1%, 2=0.1%, 4=1.0%, 8=12.5%, 16=25.0%, 32=53.0%, >=64=8.5%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=31744,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521698: Sun Jan 8 18:13:41 2023
read: IOPS=171, BW=21.5MiB/s (22.5MB/s)(6456MiB/300515msec)
slat (nsec): min=67, max=110741, avg=179.93, stdev=560.99
clat (usec): min=1745, max=2368.1k, avg=372370.14, stdev=353608.69
lat (usec): min=1745, max=2368.1k, avg=372370.32, stdev=353608.69
clat percentiles (usec):
| 1.00th=[ 1795], 5.00th=[ 1958], 10.00th=[ 27657],
| 20.00th=[ 91751], 30.00th=[ 152044], 40.00th=[ 208667],
| 50.00th=[ 270533], 60.00th=[ 346031], 70.00th=[ 446694],
| 80.00th=[ 608175], 90.00th=[ 851444], 95.00th=[1098908],
| 99.00th=[1551893], 99.50th=[1769997], 99.90th=[2365588],
| 99.95th=[2365588], 99.99th=[2365588]
bw ( KiB/s): min=16384, max=163840, per=11.74%, avg=29945.49, stdev=20620.43, samples=441
iops : min= 128, max= 1280, avg=233.94, stdev=161.10, samples=441
lat (msec) : 2=5.66%, 4=1.53%, 10=0.74%, 20=0.87%, 50=4.83%
lat (msec) : 100=7.19%, 250=26.27%, 500=27.26%, 750=12.39%, 1000=6.32%
lat (msec) : 2000=6.57%, >=2000=0.37%
cpu : usr=0.01%, sys=0.00%, ctx=3227, majf=1, minf=51
IO depths : 1=0.1%, 2=0.1%, 4=1.2%, 8=12.5%, 16=25.0%, 32=52.8%, >=64=8.5%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=51648,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521700: Sun Jan 8 18:13:41 2023
read: IOPS=191, BW=23.9MiB/s (25.0MB/s)(7176MiB/300406msec)
slat (nsec): min=72, max=91945, avg=192.07, stdev=516.01
clat (usec): min=1345, max=2019.8k, avg=334885.23, stdev=327138.08
lat (usec): min=1346, max=2019.8k, avg=334885.42, stdev=327138.08
clat percentiles (usec):
| 1.00th=[ 1762], 5.00th=[ 1942], 10.00th=[ 18220],
| 20.00th=[ 73925], 30.00th=[ 125305], 40.00th=[ 177210],
| 50.00th=[ 233833], 60.00th=[ 304088], 70.00th=[ 392168],
| 80.00th=[ 557843], 90.00th=[ 775947], 95.00th=[1010828],
| 99.00th=[1468007], 99.50th=[1686111], 99.90th=[2021655],
| 99.95th=[2021655], 99.99th=[2021655]
bw ( KiB/s): min=16384, max=147456, per=12.40%, avg=31638.91, stdev=20429.08, samples=464
iops : min= 128, max= 1152, avg=247.18, stdev=159.60, samples=464
lat (msec) : 2=5.73%, 4=2.41%, 10=0.56%, 20=1.45%, 50=4.46%
lat (msec) : 100=10.37%, 250=26.98%, 500=23.97%, 750=13.15%, 1000=5.57%
lat (msec) : 2000=5.25%, >=2000=0.11%
cpu : usr=0.02%, sys=0.00%, ctx=3724, majf=1, minf=51
IO depths : 1=0.1%, 2=0.1%, 4=0.8%, 8=12.5%, 16=25.0%, 32=52.9%, >=64=8.8%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=57408,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521701: Sun Jan 8 18:13:41 2023
read: IOPS=192, BW=24.0MiB/s (25.2MB/s)(7216MiB/300078msec)
slat (nsec): min=73, max=97429, avg=183.19, stdev=481.36
clat (usec): min=1721, max=2104.0k, avg=332665.27, stdev=299257.51
lat (usec): min=1721, max=2104.0k, avg=332665.45, stdev=299257.52
clat percentiles (usec):
| 1.00th=[ 1795], 5.00th=[ 3458], 10.00th=[ 49546],
| 20.00th=[ 109577], 30.00th=[ 147850], 40.00th=[ 198181],
| 50.00th=[ 246416], 60.00th=[ 295699], 70.00th=[ 383779],
| 80.00th=[ 517997], 90.00th=[ 759170], 95.00th=[ 968885],
| 99.00th=[1333789], 99.50th=[1417675], 99.90th=[2105541],
| 99.95th=[2105541], 99.99th=[2105541]
bw ( KiB/s): min=16384, max=98304, per=11.98%, avg=30563.92, stdev=16989.22, samples=483
iops : min= 128, max= 768, avg=238.77, stdev=132.73, samples=483
lat (msec) : 2=3.75%, 4=1.35%, 10=1.11%, 20=0.44%, 50=3.44%
lat (msec) : 100=8.54%, 250=31.71%, 500=28.27%, 750=10.87%, 1000=6.10%
lat (msec) : 2000=4.32%, >=2000=0.11%
cpu : usr=0.02%, sys=0.00%, ctx=3671, majf=2, minf=49
IO depths : 1=0.1%, 2=0.1%, 4=1.0%, 8=12.5%, 16=25.0%, 32=52.9%, >=64=8.6%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=57728,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521702: Sun Jan 8 18:13:41 2023
read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(7272MiB/300044msec)
slat (nsec): min=72, max=120810, avg=186.03, stdev=553.36
clat (usec): min=1359, max=2476.5k, avg=330065.12, stdev=319080.96
lat (usec): min=1359, max=2476.5k, avg=330065.30, stdev=319080.96
clat percentiles (usec):
| 1.00th=[ 1762], 5.00th=[ 2040], 10.00th=[ 27132],
| 20.00th=[ 71828], 30.00th=[ 120062], 40.00th=[ 173016],
| 50.00th=[ 238027], 60.00th=[ 304088], 70.00th=[ 400557],
| 80.00th=[ 549454], 90.00th=[ 784335], 95.00th=[ 985662],
| 99.00th=[1317012], 99.50th=[1451230], 99.90th=[2466251],
| 99.95th=[2466251], 99.99th=[2466251]
bw ( KiB/s): min=16384, max=278528, per=12.22%, avg=31188.40, stdev=25593.84, samples=477
iops : min= 128, max= 2176, avg=243.66, stdev=199.95, samples=477
lat (msec) : 2=4.72%, 4=1.00%, 10=1.32%, 20=1.21%, 50=7.92%
lat (msec) : 100=10.45%, 250=25.85%, 500=23.98%, 750=11.99%, 1000=6.71%
lat (msec) : 2000=4.62%, >=2000=0.22%
cpu : usr=0.01%, sys=0.00%, ctx=3583, majf=4, minf=47
IO depths : 1=0.1%, 2=0.1%, 4=1.0%, 8=12.5%, 16=25.0%, 32=53.1%, >=64=8.4%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=58176,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521703: Sun Jan 8 18:13:41 2023
read: IOPS=171, BW=21.5MiB/s (22.5MB/s)(6440MiB/300126msec)
slat (nsec): min=70, max=95766, avg=178.40, stdev=481.79
clat (usec): min=1519, max=2327.3k, avg=372811.39, stdev=381336.48
lat (usec): min=1519, max=2327.3k, avg=372811.56, stdev=381336.48
clat percentiles (usec):
| 1.00th=[ 1762], 5.00th=[ 1926], 10.00th=[ 39060],
| 20.00th=[ 88605], 30.00th=[ 131597], 40.00th=[ 189793],
| 50.00th=[ 250610], 60.00th=[ 325059], 70.00th=[ 434111],
| 80.00th=[ 591397], 90.00th=[ 910164], 95.00th=[1149240],
| 99.00th=[1769997], 99.50th=[1853883], 99.90th=[2332034],
| 99.95th=[2332034], 99.99th=[2332034]
bw ( KiB/s): min=16384, max=147456, per=11.73%, avg=29939.15, stdev=19327.55, samples=440
iops : min= 128, max= 1152, avg=233.89, stdev=150.99, samples=440
lat (msec) : 2=5.76%, 4=1.19%, 10=0.50%, 20=1.12%, 50=3.73%
lat (msec) : 100=11.18%, 250=26.46%, 500=25.22%, 750=11.43%, 1000=5.71%
lat (msec) : 2000=7.20%, >=2000=0.50%
cpu : usr=0.01%, sys=0.00%, ctx=3164, majf=2, minf=51
IO depths : 1=0.1%, 2=0.1%, 4=1.3%, 8=12.5%, 16=25.0%, 32=52.8%, >=64=8.4%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=51520,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=249MiB/s (261MB/s), 13.2MiB/s-25.9MiB/s (13.9MB/s-27.2MB/s), io=73.1GiB (78.5GB), run=300044-300515msec