This post discusses the most common NFS issues on Linux and how to resolve them.
1. Error: “Server Not Responding”
The Network File System (NFS) client and server communicate using Remote Procedure Call (RPC) messages over the network. Both the server-to-client and client-to-server communication paths must be functional. Use common tools such as ping, traceroute or tracepath to verify that the client and server machines can reach each other. If they cannot, examine the network interface card (NIC) settings using either ifconfig or ethtool and verify the IP settings.
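As a sketch, the reachability checks described above might look like this (hostnames and the interface name are placeholders):

```shell
# Verify that client and server can reach each other (names are examples)
ping -c 3 nfs-server          # basic ICMP reachability
tracepath nfs-server          # route and path-MTU check
# If the host is unreachable, inspect the NIC and IP settings:
ip addr show eth0             # or: ifconfig eth0
ethtool eth0                  # link state, speed and duplex
```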
The NFS file system also reports “server not responding” when heavy server or network load causes the RPC message responses to time out. Use the “timeo=N” mount option on the client to increase the timeout. Check “man mount” for more information.
2. Error: “No route to host”
The “no route to host” error can be reported when the client attempts to mount an NFS file system, even if the client can successfully ping the server:
# mount NFS-Server:/data /data_remote
mount: mount to NFS server 'NFS-Server' failed: System Error: No route to host.
This can be caused by the RPC messages being filtered by either the host firewall, the client firewall, or a network switch. Verify whether a firewall is active and whether NFS traffic is allowed. NFS normally uses port 2049. As a quick test, one can switch the firewall off
on both the client and the server. Then try mounting the NFS directory again. Do not forget to switch the firewall back on and configure it correctly to allow NFS traffic.
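How to disable the firewall depends on the distribution; as a rough sketch (service names are assumptions):

```shell
# Temporarily stop the firewall on BOTH client and server for testing
systemctl stop firewalld        # firewalld-based systems
service iptables stop           # older SysV/iptables systems
# After the test, re-enable it and allow NFS traffic properly, e.g.:
systemctl start firewalld
firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
```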
3. Error: “mount clntudp_create: RPC: Port mapper failure – RPC: Unable to receive”
The Linux NFS implementation requires that both the NFS service and the portmapper (RPC) service be running on both the client and the server. Check it like this:
# rpcinfo -p
   program vers proto   port
    100000    2   tcp     111  portmapper   [portmap service is started.]
    100000    2   udp     111  portmapper
    100011    1   udp     881  rquotad
    100011    2   udp     881  rquotad
    ...
# service portmap status
portmap (pid 7428) is running...   [portmap service is started.]
If it is not running, start it with the commands given below.
# chkconfig portmap on
# service portmap start
4. Error: “NFS Stale File Handle”
A program uses the open(2) system call to access an NFS file in the same way the application opens a local file. This system call returns a file descriptor, or “handle”, that the program subsequently uses in I/O commands to identify the file to be manipulated.
Unlike traditional Linux file systems that allow an application to access an open file even if the file has been deleted using unlink or rm, NFS does not support this feature. An NFS file is deleted immediately. Any program which attempts to do further I/O on the deleted file will receive the “NFS Stale File Handle” error. For example, if your current working directory is an NFS directory and is deleted, you will see this error at the next shell prompt.
To refresh the client’s state with that of the server, you may forcibly unmount the mount point:
# umount -f /mnt/mount_point
or kill the processes that reference the mounted file system:
# fuser -k [mounted-filesystem]
5. Error: “Access Denied” or “Permission Denied”
Check the export permissions for the NFS file system. You can do this from the client:
# showmount -e server_name
or from the server:
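On the server itself, the active export list and its options can be printed with exportfs (from nfs-utils):

```shell
# Show all current exports with their effective options
exportfs -v
```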
If you see unexpected export permissions, check the /etc/exports file on the server. Make sure there is no syntax error such as a space between the permitted host and the permissions. There is a significant difference between a line like (the hostname is a placeholder):
/home client1(rw)
and the line:
/home client1 (rw)
because the second exports /home read-write to all systems: not what was intended. Note that the second line still has correct syntax, so NFS will not complain about it.
6. Error: “rpc mount export: RPC: Timed out”
Error message:
Unable to access file system at [NFS SERVER]: rpc mount export: RPC: Timed out
This is caused by a DNS name resolution issue: NFS (RPC) needs reverse name resolution. If the NFS server or client cannot resolve the peer's name, this error occurs. If you get this error message, check the DNS configuration and /etc/hosts on both machines.
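For example, resolution can be verified on both machines roughly like this (the hostname and address are placeholders):

```shell
# Forward lookup of the peer's hostname (consults DNS and /etc/hosts)
getent hosts nfs-client.example.com
# Reverse lookup of the peer's IP address; it must return the expected name
getent hosts 192.168.1.80
```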
This section describes the most common troubleshooting issues related to NFS.
mount command on NFS client fails with “RPC Error: Program not registered”
Start portmap or rpcbind service on the NFS server.
This error is encountered when the server has not started correctly.
On most Linux distributions this is fixed by starting portmap:
$ /etc/init.d/portmap start
On some distributions where portmap has been replaced by rpcbind, the
following command is required:
$ /etc/init.d/rpcbind start
After starting portmap or rpcbind, the Gluster NFS server needs to be restarted.
NFS server start-up fails with “Port is already in use” error in the log file.
Another Gluster NFS server is running on the same machine.
This error can arise in case there is already a Gluster NFS server
running on the same machine. This situation can be confirmed from the
log file, if the following error lines exist:
[2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed:Address already in use
[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use
[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection
[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed
[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
[2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols
To resolve this error, one of the Gluster NFS servers will have to be shut down. At this time, the Gluster NFS server does not support running multiple NFS servers on the same machine.
mount command fails with “rpc.statd” related error message
If the mount command fails with the following error message:
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
For NFS clients to mount the NFS server, rpc.statd service must be
running on the clients. Start rpc.statd service by running the following command:
$ rpc.statd
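To confirm that statd is actually registered, or to start it persistently on a systemd distribution (the service name is an assumption and may vary), one might run:

```shell
# Check whether the status service is registered with the portmapper
rpcinfo -p | grep status
# On systemd distributions, start rpc.statd persistently instead:
systemctl enable --now rpc-statd.service
```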
mount command takes too long to finish.
Start rpcbind service on the NFS client
The problem is that the rpcbind or portmap service is not running on the
NFS client. The resolution for this is to start either of these services
by running the following command:
$ /etc/init.d/portmap start
On some distributions where portmap has been replaced by rpcbind, the
following command is required:
$ /etc/init.d/rpcbind start
NFS server glusterfsd starts but initialization fails with “rpc-service: portmap registration of program failed” error message in the log.
NFS start-up can succeed but the initialization of the NFS service can
still fail preventing clients from accessing the mount points. Such a
situation can be confirmed from the following error messages in the log
file:
[2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could not register with portmap
[2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed
[2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
[2010-05-26 23:33:47] C [nfs.c:531:notify] nfs: Failed to initialize protocols
[2010-05-26 23:33:49] E [rpcsvc.c:2614:rpcsvc_program_unregister_portmap] rpc-service: Could not unregister with portmap
[2010-05-26 23:33:49] E [rpcsvc.c:2731:rpcsvc_program_unregister] rpc-service: portmap unregistration of program failed
[2010-05-26 23:33:49] E [rpcsvc.c:2744:rpcsvc_program_unregister] rpc-service: Program unregistration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
-   Start portmap or rpcbind service on the NFS server

    On most Linux distributions, portmap can be started using the following command:

    $ /etc/init.d/portmap start

    On some distributions where portmap has been replaced by rpcbind, run the following command:

    $ /etc/init.d/rpcbind start

    After starting portmap or rpcbind, the Gluster NFS server needs to be restarted.
-   Stop another NFS server running on the same machine

    Such an error is also seen when there is another NFS server running on the same machine but it is not the Gluster NFS server. On Linux systems, this could be the kernel NFS server. Resolution involves stopping the other NFS server or not running the Gluster NFS server on the machine. Before stopping the kernel NFS server, ensure that no critical service depends on access to that NFS server’s exports.

    On Linux, kernel NFS servers can be stopped by using either of the following commands depending on the distribution in use:

    $ /etc/init.d/nfs-kernel-server stop
    $ /etc/init.d/nfs stop
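Before stopping anything, it can help to confirm which server currently owns the NFS ports; a possible check (run as root):

```shell
# Which NFS programs are registered, and who listens on the NFS port 2049?
rpcinfo -p | grep -w nfs
ss -tlnp | grep 2049
```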
-   Restart Gluster NFS server
mount command fails with NFS server failed error.
mount command fails with following error
*mount: mount to NFS server '10.1.10.11' failed: timed out (retrying).*
Perform one of the following to resolve this issue:
-   Disable name lookup requests from NFS server to a DNS server

    The NFS server attempts to authenticate NFS clients by performing a reverse DNS lookup to match hostnames in the volume file with the client IP addresses. There can be a situation where the NFS server either is not able to connect to the DNS server or the DNS server is taking too long to respond to DNS requests. These delays can result in delayed replies from the NFS server to the NFS client, resulting in the timeout error seen above.

    The NFS server provides a work-around that disables DNS requests, instead relying only on the client IP addresses for authentication. The following option can be added for successful mounting in such situations:

    option rpc-auth.addr.namelookup off

    Note: Remember that disabling name lookups forces authentication of clients to use only IP addresses; if the authentication rules in the volume file use hostnames, those rules will fail and disallow mounting for those clients.

    OR
-   NFS version used by the NFS client is other than version 3

    The Gluster NFS server supports version 3 of the NFS protocol. In recent Linux kernels, the default NFS version has been changed from 3 to 4. It is possible that the client machine is unable to connect to the Gluster NFS server because it is using version 4 messages, which are not understood by the Gluster NFS server. The timeout can be resolved by forcing the NFS client to use version 3. The vers option to the mount command is used for this purpose:

    $ mount -o vers=3
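A complete invocation might look like the following (server name, volume and mount point are placeholders):

```shell
# Force NFSv3 when mounting a Gluster NFS export
mount -t nfs -o vers=3 nfs-server:/test-volume /mnt/test
```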
showmount fails with clnt_create: RPC: Unable to receive
Check your firewall settings: port 111 must be open for portmap requests/replies, along with the ports used by the Gluster NFS server for its requests/replies. The Gluster NFS server operates over the following port numbers: 38465, 38466, and 38467.
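As an illustrative sketch only (exact rules depend on your firewall setup), iptables rules admitting this traffic could look like:

```shell
# Allow portmapper and Gluster NFS server traffic through the firewall
iptables -A INPUT -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT
```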
Application fails with «Invalid argument» or «Value too large for defined data type» error.
These two errors generally happen with 32-bit NFS clients, or with applications that do not support 64-bit inode numbers or large files. Use the following option from the CLI to make Gluster NFS return 32-bit inode numbers instead:

nfs.enable-ino32 <on|off>
Applications that will benefit are those that were either:

- built 32-bit and run on 32-bit machines, such that they do not support large files by default
- built 32-bit on 64-bit systems
This option is disabled by default so NFS returns 64-bit inode numbers
by default.
Applications which can be rebuilt from source are recommended to rebuild
using the following flag with gcc:
-D_FILE_OFFSET_BITS=64
Dedicated article for common problems and solutions.
Server-side issues
exportfs: /etc/exports:2: syntax error: bad option list
Make sure to remove all spaces from the option list in /etc/exports.
exportfs: requires fsid= for NFS export
As not all filesystems are stored on devices and not all filesystems have UUIDs (e.g. FUSE), it is sometimes necessary to explicitly tell NFS how to identify a filesystem. This is done with the fsid option:
/etc/exports
/srv/nfs         client(rw,sync,crossmnt,fsid=0)
/srv/nfs/music   client(rw,sync,fsid=10)
Group/GID permissions issues
If NFS shares mount fine, and are fully accessible to the owner, but not to group members, check the number of groups that user belongs to. NFS has a limit of 16 on the number of groups a user can belong to. If you have users in more groups than this, you need to enable the manage-gids start-up flag on the NFS server:
/etc/nfs.conf
[mountd] manage-gids=y
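To check whether a user is over the 16-group limit, count the supplementary groups (shown here for the current user):

```shell
# AUTH_SYS carries at most 16 groups; count the current user's groups
id -G | wc -w
```

If the number printed exceeds 16, enable manage-gids as described above.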
«Permission denied» when trying to write files as root
- If you need to mount shares as root, and have full r/w access from the client, add the no_root_squash option to the export in /etc/exports:
/var/cache/pacman/pkg 192.168.1.0/24(rw,no_subtree_check,no_root_squash)
- You must also add no_root_squash to the first line in /etc/exports:
/ 192.168.1.0/24(rw,fsid=root,no_root_squash,no_subtree_check)
«RPC: Program not registered» when showmount -e command issued
Make sure that nfs-server.service and rpcbind.service are running on the server, see systemd. If they are not, start and enable them.
Also make sure NFSv3 is enabled. showmount does not work with NFSv4-only servers.
UDP mounts not working
nfs-utils disabled serving NFS over UDP in version 2.2.1. Arch core updated to 2.3.1 on 21 Dec 2017 (skipping over 2.2.1). If UDP stopped working then, add udp=y under [nfsd] in /etc/nfs.conf. Then restart nfs-server.service.
Timeout with big directories
Since nfs-utils version 1.0.x, every subdirectory is checked for permissions. This can lead to timeouts on directories with a «large» number of subdirectories, even a few hundred.
To disable this behaviour, add the no_subtree_check option to the share in /etc/exports.
Client-side issues
mount.nfs4: No such device
Make sure the nfsd kernel module has been loaded.
mount.nfs4: Invalid argument
Enable and start nfs-client.target and make sure the appropriate daemons (nfs-idmapd, rpc-gssd, etc.) are running on the server.
mount.nfs4: Network is unreachable
Users making use of systemd-networkd or NetworkManager might notice NFS mounts are not mounted when booting.
Force the network to be completely configured by enabling systemd-networkd-wait-online.service or NetworkManager-wait-online.service. This may slow down the boot process because fewer services run in parallel.
Tip: If the NFS server is only expecting IPv4 addresses, and you are using NetworkManager-wait-online.service, set ipv4.may-fail=no in your network profile to make sure that an IPv4 address is acquired before NetworkManager-wait-online.service is reached.
mount.nfs4: an incorrect mount option was specified
This can happen if using the sec=krb5 option without nfs-client.target and/or rpc-gssd.service running. Starting and enabling those services should resolve the issue.
Unable to connect from OS X clients
When trying to connect from an OS X client, you will see that everything is ok in the server logs, but OS X will refuse to mount your NFS share. You can do one of two things to fix this:
- On the NFS server, add the insecure option to the share in /etc/exports and re-run exportfs -r.
… OR …
- On the OS X client, add the resvport option to the mount command line. You can also set resvport as a default client mount option in /etc/nfs.conf:
/etc/nfs.conf
nfs.client.mount.options = resvport
Using the default client mount option should also affect mounting the share from Finder via «Connect to Server…».
Unreliable connection from OS X clients
OS X’s NFS client is optimized for OS X Servers and might present some issues with Linux servers. If you are experiencing slow performance, frequent disconnects and problems with international characters, edit the default mount options by adding the line nfs.client.mount.options = intr,locallocks,nfc to /etc/nfs.conf on your Mac client. More information about the mount options can be found in the OS X mount_nfs man page.
Intermittent client freezes when copying large files
If you copy large files from your client machine to the NFS server, the transfer speed is very fast, but after some seconds the speed drops and your client machine intermittently locks up completely for some time until the transfer is finished.
Try adding sync as a mount option on the client (e.g. in /etc/fstab) to fix this problem.
mount.nfs: Operation not permitted
NFSv4
If you use Kerberos (sec=krb5*), make sure the client and server clocks are correct. Using ntpd or systemd-timesyncd is recommended. Also, check that the canonical name for the server as resolved on the client (see Domain name resolution) matches the name in the server’s NFS principal.
NFSv3 and earlier
nfs-utils versions 1.2.1-2 or higher use NFSv4 by default, resulting in NFSv3 shares failing on upgrade. The problem can be solved by using either the mount option vers=3 or nfsvers=3 on the command line:
# mount.nfs remote target directory -o ...,vers=3,...
# mount.nfs remote target directory -o ...,nfsvers=3,...
or in /etc/fstab:
remote target directory nfs ...,vers=3,... 0 0
remote target directory nfs ...,nfsvers=3,... 0 0
mount.nfs: Protocol not supported
This error occurs when you include the export root in the path of the NFS source.
For example:
# mount SERVER:/srv/nfs4/media /mnt mount.nfs4: Protocol not supported
Use the relative path instead:
# mount SERVER:/media /mnt
Permissions issues
If you find that you cannot set the permissions on files properly, make sure the user and group exist on both the client and the server.
If all your files are owned by nobody, and you are using NFSv4, on both the client and server, you should ensure that nfs-idmapd.service has been started.
On some systems, detecting the domain from FQDN minus hostname does not seem to work reliably. If files are still showing as nobody after the above changes, edit /etc/idmapd.conf and ensure that Domain is set to FQDN minus hostname. For example:
/etc/idmapd.conf
[General]
Domain = domain.ext

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

[Translation]
Method = nsswitch
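After changing the file, the mapping daemon usually needs a restart, and cached mappings can be cleared (the service name assumes a systemd setup with nfs-utils):

```shell
# Apply the new idmapd configuration
systemctl restart nfs-idmapd.service
# Clear the kernel's cached id mappings so stale "nobody" entries go away
nfsidmap -c
```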
Problems with Vagrant and synced_folders
If you get an error about an unsupported protocol, you need to enable NFS over UDP on your host (or make Vagrant use NFS over TCP). See #UDP mounts not working.
If Vagrant scripts are unable to mount folders over NFS, installing the net-tools package may solve the issue.
Performance issues
This NFS Howto page has some useful information regarding performance. Here are some further tips:
Diagnose the problem
- htop should be your first port of call. The most obvious symptom will be a maxed-out CPU.
- Press F2, and under «Display options», enable «Detailed CPU time». Press F1 for an explanation of the colours used in the CPU bars. In particular, is the CPU spending most of its time responding to IRQs, or in Wait-IO (wio)?
Close-to-open/flush-on-close
Symptoms: Your clients are writing many small files. The server CPU is not maxed out, but there is very high wait-IO, and the server disk seems to be churning more than you might expect.
In order to ensure data consistency across clients, the NFS protocol requires that the client’s cache is flushed (all data is pushed to the server) whenever a file is closed after writing. Because the server is not allowed to buffer disk writes (if it crashes, the client will not realise the data was not written properly), the data is written to disk immediately before the client’s request is completed. When you are writing lots of small files from the client, this means that the server spends most of its time waiting for small files to be written to its disk, which can cause a significant reduction in throughput.
See this excellent article or the nfs manpage for more details on the close-to-open policy. There are several approaches to solving this problem:
The nocto mount option
Warning: The Linux kernel does not seem to honor this option properly. Files are still flushed when they are closed.
If all of the following conditions are satisfied:
- The export you have mounted on the client is only going to be used by the one client.
- It does not matter too much if a file written on one client does not immediately appear on other clients.
- It does not matter if after a client has written a file, and the client thinks the file has been saved, and then the client crashes, the file may be lost.
Use the nocto mount option, which will disable the close-to-open behavior.
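For illustration, such a mount could be requested like this (server name and paths are placeholders):

```shell
# Mount with close-to-open consistency disabled (see the warning above
# about kernel support for nocto)
mount -t nfs -o nocto nfs-server:/export /mnt/export
```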
The async export option
Does your situation match these conditions?
- It is important that when a file is closed after writing on one client, it is:
- Immediately visible on all the other clients.
- Safely stored on the server, even if the client crashes immediately after closing the file.
- It is not important to you that if the server crashes:
- You may lose the files that were most recently written by clients.
- When the server is restarted, the clients will believe their recent files exist, even though they were actually lost.
In this situation, you can use async instead of sync in the server’s /etc/exports file for those specific exports. See the exports manual page for details. In this case, it does not make sense to use the nocto mount option on the client.
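A sketch of such an export entry, followed by re-exporting (the path and network are examples):

```shell
# /etc/exports entry using async instead of sync:
#   /srv/export 192.168.1.0/24(rw,async,no_subtree_check)
# After editing, re-export without restarting the server:
exportfs -r
```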
Buffer cache size and MTU
Symptoms: High kernel or IRQ CPU usage, a very high packet count through the network card.
This is a trickier optimisation. Make sure this is definitely the problem before spending too much time on this. The default values are usually fine for most situations.
See this article for information about I/O buffering in NFS. Essentially, data is accumulated into buffers before being sent. The size of the buffer will affect the way data is transmitted over the network. The Maximum Transmission Unit (MTU) of the network equipment will also affect throughput, as the buffers need to be split into MTU-sized chunks before they are sent over the network. If your buffer size is too big, the kernel or hardware may spend too much time splitting it into MTU-sized chunks. If the buffer size is too small, there will be overhead involved in sending a very large number of small packets. You can use the rsize and wsize mount options on the client to alter the buffer cache size. To achieve the best throughput, you need to experiment and discover the best values for your setup.
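One way to experiment with buffer sizes (sizes, names and paths below are examples; measure, then adjust):

```shell
# Mount with explicit buffer sizes, then measure sequential write throughput
mount -t nfs -o rsize=32768,wsize=32768 nfs-server:/export /mnt/export
dd if=/dev/zero of=/mnt/export/testfile bs=1M count=512 conv=fsync
```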
It is possible to change the MTU of many network cards. If your clients are on a separate subnet (e.g. for a Beowulf cluster), it may be safe to configure all of the network cards to use a high MTU. This should be done in very-high-bandwidth environments.
See NFS#Performance tuning for more information.
Debugging
Using rpcdebug
Using rpcdebug
is the easiest way to manipulate the kernel interfaces in place of echoing bitmasks to /proc.
| Option | Description |
|---|---|
| -c | Clear the given debug flags |
| -s | Set the given debug flags |
| -m module | Specify which module’s flags to set or clear |
| -v | Increase the verbosity of rpcdebug’s output |
| -h | Print a help message and exit. When combined with the -v option, also prints the available debug flags. |
For the -m option, the available modules are:
| Module | Description |
|---|---|
| nfsd | The NFS server |
| nfs | The NFS client |
| nlm | The Network Lock Manager, in either an NFS client or server |
| rpc | The Remote Procedure Call module, in either an NFS client or server |
Examples:
rpcdebug -m rpc -s all    # sets all debug flags for RPC
rpcdebug -m rpc -c all    # clears all debug flags for RPC
rpcdebug -m nfsd -s all   # sets all debug flags for the NFS server
rpcdebug -m nfsd -c all   # clears all debug flags for the NFS server
Once the flags are set you can tail the journal for the debug output, usually by running journalctl -fl as root or similar.
Using mountstats
The nfs-utils package contains the mountstats tool, which can retrieve a lot of statistics about NFS mounts, including average timings and packet size.
$ mountstats
Stats for example:/tank mounted on /tank:
  NFS mount options: rw,sync,vers=4.2,rsize=524288,wsize=524288,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,proto=tcp,port=0,timeo=15,retrans=2,sec=sys,clientaddr=xx.yy.zz.tt,local_lock=none
  NFS server capabilities: caps=0xfbffdf,wtmult=512,dtsize=32768,bsize=0,namlen=255
  NFSv4 capability flags: bm0=0xfdffbfff,bm1=0x40f9be3e,bm2=0x803,acl=0x3,sessions,pnfs=notconfigured
  NFS security flavor: 1  pseudoflavor: 0

NFS byte counts:
  applications read 248542089 bytes via read(2)
  applications wrote 0 bytes via write(2)
  applications read 0 bytes via O_DIRECT read(2)
  applications wrote 0 bytes via O_DIRECT write(2)
  client read 171375125 bytes via NFS READ
  client wrote 0 bytes via NFS WRITE

RPC statistics:
  699 RPC requests sent, 699 RPC replies received (0 XIDs not found)
  average backlog queue length: 0

READ: 338 ops (48%)
  avg bytes sent per op: 216   avg bytes received per op: 507131
  backlog wait: 0.005917   RTT: 548.736686   total execute time: 548.775148 (milliseconds)
GETATTR: 115 ops (16%)
  avg bytes sent per op: 199   avg bytes received per op: 240
  backlog wait: 0.008696   RTT: 15.756522   total execute time: 15.843478 (milliseconds)
ACCESS: 93 ops (13%)
  avg bytes sent per op: 203   avg bytes received per op: 168
  backlog wait: 0.010753   RTT: 2.967742   total execute time: 3.032258 (milliseconds)
LOOKUP: 32 ops (4%)
  avg bytes sent per op: 220   avg bytes received per op: 274
  backlog wait: 0.000000   RTT: 3.906250   total execute time: 3.968750 (milliseconds)
OPEN_NOATTR: 25 ops (3%)
  avg bytes sent per op: 268   avg bytes received per op: 350
  backlog wait: 0.000000   RTT: 2.320000   total execute time: 2.360000 (milliseconds)
CLOSE: 24 ops (3%)
  avg bytes sent per op: 224   avg bytes received per op: 176
  backlog wait: 0.000000   RTT: 30.250000   total execute time: 30.291667 (milliseconds)
DELEGRETURN: 23 ops (3%)
  avg bytes sent per op: 220   avg bytes received per op: 160
  backlog wait: 0.000000   RTT: 6.782609   total execute time: 6.826087 (milliseconds)
READDIR: 4 ops (0%)
  avg bytes sent per op: 224   avg bytes received per op: 14372
  backlog wait: 0.000000   RTT: 198.000000   total execute time: 198.250000 (milliseconds)
SERVER_CAPS: 2 ops (0%)
  avg bytes sent per op: 172   avg bytes received per op: 164
  backlog wait: 0.000000   RTT: 1.500000   total execute time: 1.500000 (milliseconds)
FSINFO: 1 ops (0%)
  avg bytes sent per op: 172   avg bytes received per op: 164
  backlog wait: 0.000000   RTT: 2.000000   total execute time: 2.000000 (milliseconds)
PATHCONF: 1 ops (0%)
  avg bytes sent per op: 164   avg bytes received per op: 116
  backlog wait: 0.000000   RTT: 1.000000   total execute time: 1.000000 (milliseconds)
Kernel Interfaces
A bitmask of the debug flags can be echoed into the interface to enable output to syslog; 0 is the default:
/proc/sys/sunrpc/nfsd_debug
/proc/sys/sunrpc/nfs_debug
/proc/sys/sunrpc/nlm_debug
/proc/sys/sunrpc/rpc_debug
Sysctl controls are registered for these interfaces, so they can be used instead of echo:
sysctl -w sunrpc.rpc_debug=1023
sysctl -w sunrpc.rpc_debug=0
sysctl -w sunrpc.nfsd_debug=1023
sysctl -w sunrpc.nfsd_debug=0
At runtime the server holds information that can be examined:
grep . /proc/net/rpc/*/content
cat /proc/fs/nfs/exports
cat /proc/net/rpc/nfsd
ls -l /proc/fs/nfsd
A rundown of /proc/net/rpc/nfsd (the userspace tool nfsstat pretty-prints this info):
* rc (reply cache): <hits> <misses> <nocache>
- hits: the client is retransmitting; the reply was served from the cache
- misses: an operation that requires caching
- nocache: an operation that does not require caching
* fh (filehandle): <stale> <total-lookups> <anonlookups> <dir-not-in-cache> <nodir-not-in-cache>
- stale: file handle errors
- total-lookups, anonlookups, dir-not-in-cache, nodir-not-in-cache: always seem to be zeros
* io (input/output): <bytes-read> <bytes-written>
- bytes-read: bytes read directly from disk
- bytes-written: bytes written to disk
* th (threads): <threads> <fullcnt> <10%-20%> <20%-30%> ... <90%-100%> <100%>
DEPRECATED: All fields after <threads> are hard-coded to 0
- threads: number of nfsd threads
- fullcnt: number of times that the last 10% of threads are busy
- 10%-20%, 20%-30% ... 90%-100%: 10 numbers, each counting how many times that fraction of the threads was busy
* ra (read-ahead): <cache-size> <10%> <20%> ... <100%> <not-found>
- cache-size: always twice the number of threads
- 10%, 20% ... 100%: how deep into the cache the entry was found
- not-found: not found in the read-ahead cache
* net: <netcnt> <netudpcnt> <nettcpcnt> <nettcpconn>
- netcnt: counts every read
- netudpcnt: counts every UDP packet it receives
- nettcpcnt: counts every time it receives data from a TCP connection
- nettcpconn: count every TCP connection it receives
* rpc: <rpccnt> <rpcbadfmt+rpcbadauth+rpcbadclnt> <rpcbadfmt> <rpcbadauth> <rpcbadclnt>
- rpccnt: counts all rpc operations
- rpcbadfmt: incremented when, while processing an RPC, one of the following errors is encountered:
. err_bad_dir, err_bad_rpc, err_bad_prog, err_bad_vers, err_bad_proc, err_bad
- rpcbadauth: bad authentication
. does not count attempts to mount from a machine that is not in your exports file
- rpcbadclnt: unused
* procN (N = vers): <vs_nproc> <null> <getattr> <setattr> <lookup> <access> <readlink> <read> <write> <create> <mkdir> <symlink> <mknod> <remove> <rmdir> <rename> <link> <readdir> <readdirplus> <fsstat> <fsinfo> <pathconf> <commit>
- vs_nproc: number of procedures for NFS version
. v2: nfsproc.c, 18
. v3: nfs3proc.c, 22
. v4: nfs4proc.c, 2
- statistics: generated from NFS operations at runtime
* proc4ops: <ops> <x..y>
- ops: the definition of LAST_NFS4_OP, OP_RELEASE_LOCKOWNER = 39, plus 1 (so 40); defined in nfs4.h
- x..y: the array of nfs_opcount up to LAST_NFS4_OP (nfsdstats.nfs4_opcount[i])
NFSD debug flags
/usr/include/linux/nfsd/debug.h
/*
 * knfsd debug flags
 */
#define NFSDDBG_SOCK      0x0001
#define NFSDDBG_FH        0x0002
#define NFSDDBG_EXPORT    0x0004
#define NFSDDBG_SVC       0x0008
#define NFSDDBG_PROC      0x0010
#define NFSDDBG_FILEOP    0x0020
#define NFSDDBG_AUTH      0x0040
#define NFSDDBG_REPCACHE  0x0080
#define NFSDDBG_XDR       0x0100
#define NFSDDBG_LOCKD     0x0200
#define NFSDDBG_ALL       0x7FFF
#define NFSDDBG_NOCHANGE  0xFFFF
NFS debug flags
/usr/include/linux/nfs_fs.h
/*
 * NFS debug flags
 */
#define NFSDBG_VFS          0x0001
#define NFSDBG_DIRCACHE     0x0002
#define NFSDBG_LOOKUPCACHE  0x0004
#define NFSDBG_PAGECACHE    0x0008
#define NFSDBG_PROC         0x0010
#define NFSDBG_XDR          0x0020
#define NFSDBG_FILE         0x0040
#define NFSDBG_ROOT         0x0080
#define NFSDBG_CALLBACK     0x0100
#define NFSDBG_CLIENT       0x0200
#define NFSDBG_MOUNT        0x0400
#define NFSDBG_FSCACHE      0x0800
#define NFSDBG_PNFS         0x1000
#define NFSDBG_PNFS_LD      0x2000
#define NFSDBG_STATE        0x4000
#define NFSDBG_ALL          0xFFFF
NLM debug flags
/usr/include/linux/lockd/debug.h
/*
 * Debug flags
 */
#define NLMDBG_SVC        0x0001
#define NLMDBG_CLIENT     0x0002
#define NLMDBG_CLNTLOCK   0x0004
#define NLMDBG_SVCLOCK    0x0008
#define NLMDBG_MONITOR    0x0010
#define NLMDBG_CLNTSUBS   0x0020
#define NLMDBG_SVCSUBS    0x0040
#define NLMDBG_HOSTCACHE  0x0080
#define NLMDBG_XDR        0x0100
#define NLMDBG_ALL        0x7fff
RPC debug flags
/usr/include/linux/sunrpc/debug.h
/*
 * RPC debug facilities
 */
#define RPCDBG_XPRT     0x0001
#define RPCDBG_CALL     0x0002
#define RPCDBG_DEBUG    0x0004
#define RPCDBG_NFS      0x0008
#define RPCDBG_AUTH     0x0010
#define RPCDBG_BIND     0x0020
#define RPCDBG_SCHED    0x0040
#define RPCDBG_TRANS    0x0080
#define RPCDBG_SVCXPRT  0x0100
#define RPCDBG_SVCDSP   0x0200
#define RPCDBG_MISC     0x0400
#define RPCDBG_CACHE    0x0800
#define RPCDBG_ALL      0x7fff
See also
- rpcdebug(8)
- http://utcc.utoronto.ca/~cks/space/blog/linux/NFSClientDebuggingBits
- http://www.novell.com/support/kb/doc.php?id=7011571
- http://stromberg.dnsalias.org/~strombrg/NFS-troubleshooting-2.html
- http://www.opensubscriber.com/message/nfs@lists.sourceforge.net/7833588.html
NFS setup problem (RPC not registered)
Hello,
I’ve followed several guides and am trying to get NFS working on my local network.
For starters, I think I should be able to NFS mount a drive on the server (192.168.1.65) TO the server. From there I’ll work with another client (currently 192.168.1.80)
Code:
$ cat /etc/exports
/media 192.168.1.0/24(rw,no_subtree_check)
$ cat /etc/hosts.deny
portmap mountd nfsd statd lockd rquotad : ALL
$ cat /etc/hosts.allow
ALL : 192.168.1.65/255.255.255.0, 192.168.1.80/255.255.255.0
ran
Code:
$ sudo exportfs -ra
$ sudo /etc/init.d/nfs-kernel-server restart
$ sudo /etc/init.d/portmap restart
$ sudo mount 192.168.1.65:/media blah
mount.nfs: mount to NFS server '192.168.1.65:/media' failed: RPC Error: Program not registered
from the client:
Code:
$ sudo mount 192.168.1.65:/media blah
mount.nfs: mount to NFS server '192.168.1.65:/media' failed: RPC Error: Program not registered
<tears hair out>
The guides I’ve read seem simple, so I must be missing something easy. Can’t figure it out, so I’m looking for help after googling for 2 hours. Thanks
-
Re: NFS setup problem (RPC not registered)
Do you see NFS if you query with rpcinfo -p?
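The rpcinfo check suggested here is the standard first step for a "Program not registered" error: confirm that nfs and mountd actually appear in the portmapper's program list. A sketch of that check (the helper name and the canned sample output are illustrative, not from this thread; on a real system you would feed it live `rpcinfo -p <server>` output):

```shell
# Hypothetical helper: report whether a given RPC program name
# appears in rpcinfo -p style output read from stdin.
check_registered() {
    if grep -qw "$1"; then
        echo "$1 registered"
    else
        echo "$1 NOT registered"
        return 1
    fi
}

# Canned sample output for illustration (port numbers are typical defaults):
sample='   100000    2   tcp    111  portmapper
   100003    2   udp   2049  nfs
   100005    1   udp    892  mountd'

echo "$sample" | check_registered nfs      # prints: nfs registered
echo "$sample" | check_registered mountd   # prints: mountd registered
```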
-
Re: NFS setup problem (RPC not registered)
Maybe try in /etc/hosts.allow:
Code:
ALL: 192.168.1.65
ALL: 192.168.1.80
-
Re: NFS setup problem (RPC not registered)
If you use the UFW firewall, make sure your clients have permission to connect to the server on all ports.
-
Re: NFS setup problem (RPC not registered)
Code:
sudo apt-get update
sudo apt-get install portmap
-
Re: NFS setup problem (RPC not registered)
Oh yes, after editing the exports file, have you:
Code:
sudo exportfs -a
sudo /etc/init.d/nfs-kernel-server restart
sudo /etc/init.d/portmap restart
?
-
Re: NFS setup problem (RPC not registered)
Originally Posted by jgarner
do you see NFS if you query with rpcinfo -p
yes
Originally Posted by Jive Turkey
Maybe try in /etc/hosts.allow:
Code:
ALL: 192.168.1.65
ALL: 192.168.1.80
no difference
Originally Posted by ene_dene
If you use the UFW firewall, make sure your clients have permission to connect to the server on all ports.
don’t know what UFW is, probably not (?) using it
Originally Posted by KiLaHuRtZ
Code:
sudo apt-get update
sudo apt-get install portmap
up to date
Originally Posted by ene_dene
Oh yes, after editing the exports file, have you:
Code:
sudo exportfs -a
sudo /etc/init.d/nfs-kernel-server restart
sudo /etc/init.d/portmap restart
?
yes
Now I see:
Code:
$ sudo mount 192.168.1.65:/media blah -v
mount: no type was given - I'll assume nfs because of the colon
mount.nfs: timeout set for Sat May  1 22:57:09 2010
mount.nfs: text-based options: 'addr=192.168.1.65'
mount.nfs: mount(2): Input/output error
mount.nfs: mount system call failed
but if I tail dmesg, I see:
Code:
[11534.140022] rpcbind: server 192.168.1.65 not responding, timed out
which is very different from before. Now running from the client machine (192.168.1.80)
Code:
$ sudo mount 192.168.1.65:/media tmp/ -v
mount: no type was given - I'll assume nfs because of the colon
mount: wrong fs type, bad option, bad superblock on 192.168.1.65:/media,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so
this just seems like a b0rked mount, though
-
Re: NFS setup problem (RPC not registered)
I don’t know where the problem is, but here is how I would do it.
Assumptions:
server address: 192.168.1.65
client address: 192.168.1.80
On the server I'd install portmap and nfs-kernel-server.
Then I'd edit the /etc/exports file and put:
Code:
/media 192.168.0.2(rw,sync,no_subtree_check)
then exportfs -a as you have done, restart portmap and nfs-kernel-server as you have done.
Now, on the server, you need to check whether your UFW firewall is running (check this first, before anything else; maybe this is the problem). You do this by:
Code:
sudo ufw status
If the output is:
Code:
Status: active
then you can either disable the firewall (sudo ufw disable), which I wouldn't do, or allow all traffic from the local network to the server by:
Code:
sudo ufw allow from 192.168.1.0/24
To me, it's easier with the firewall anyway than with hosts.allow/hosts.deny. If you use the firewall, you don't need to put anything in hosts.allow or hosts.deny.
Now, you don't have to mount anything on the server if you just want to share the /media folder.
On the client, you need the NFS client tools installed (the nfs-common package on Ubuntu). Then you just mount the drive:
Code:
sudo mount -t nfs 192.168.1.65:/media /folder_of_your_choice
Of course, that folder needs to exist on client, and it probably needs to be empty.
Btw, I have it configured the way shown, and it worked with Ubuntu Server 9.10 and now works with Ubuntu Server 10.04.
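Once the mount works interactively, it can be made persistent across reboots with an /etc/fstab entry on the client. A sketch under the thread's assumptions (server 192.168.1.65 exporting /media; the mount point name and options are examples, see nfs(5) for the full option list):

```
# /etc/fstab on the client (illustrative entry)
192.168.1.65:/media  /folder_of_your_choice  nfs  defaults  0  0
```

The mount point directory must exist before boot, just as for a manual mount.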
-
Re: NFS setup problem (RPC not registered)
I appreciate the clear responses; from the descriptions it seems this should be no problem.
ufw ISN’T running
Code:
$ sudo ufw status
Status: inactive
after running (on the client):
Code:
$ sudo mount 192.168.1.65:/media blah -v
mount: no type was given - I'll assume nfs because of the colon
mount.nfs: timeout set for Sun May  2 11:44:24 2010
mount.nfs: text-based options: 'addr=192.168.1.65'
mount.nfs: mount(2): Input/output error
mount.nfs: mount system call failed
from the server I see this:
Code:
$ tail dmesg
[57386.570805] nfsd: last server has exited, flushing export cache
[57387.795916] svc: failed to register lockdv1 RPC service (errno 97).
[57387.797685] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[57387.797712] NFSD: starting 90-second grace period
I also note that it doesn't matter what I choose for the SOURCE directory to mount:
192.168.1.65:/home
gives the same output as
192.168.1.65:/media
FYI:
Code:
$ sudo exportfs -rva
exporting 192.168.0.2:/media
Maybe I'll upgrade to 10.04, wipe it off the server and start over cleanly. Seems like it must be something silly.
-
Re: NFS setup problem (RPC not registered)
I’m sorry, but I don’t know where the problem is. Perhaps someone with more experience with NFS errors could be of better help.
I hope upgrading will change something.