Setting up nfsen
Error running ./install.pl (the Perl RRDs module is missing):
inoue@netflowc:~/nfsen-1.3.6p1$ ./install.pl
Can't locate RRDs.pm in @INC (you may need to install the RRDs module) (@INC contains: ./libexec ./installer-items /etc/perl /usr/local/lib/perl/5.18.2 /usr/local/share/perl/5.18.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.18 /usr/share/perl/5.18 /usr/local/lib/site_perl .) at libexec/NfSenRRD.pm line 38.
BEGIN failed--compilation aborted at libexec/NfSenRRD.pm line 38.
Compilation failed in require at libexec/NfSen.pm line 43.
BEGIN failed--compilation aborted at libexec/NfSen.pm line 43.
Compilation failed in require at ./install.pl line 46.
BEGIN failed--compilation aborted at ./install.pl line 46.
Error installing Mail::Header via CPAN ('make' was not installed yet):
me@netflowc:~/nfsen-1.3.6p1$ sudo perl -MCPAN -e 'install Mail::Header'
CPAN.pm requires configuration, but most of it can be done automatically.
If you answer 'no' below, you will enter an interactive dialog for each
configuration option instead.
Would you like to configure as much as possible automatically? [yes]
ALERT: 'make' is an essential tool for building perl Modules.
Please make sure you have 'make' (or some equivalent) working.
Autoconfigured everything but 'urllist'.
Now you need to choose your CPAN mirror sites. You can let me
pick mirrors for you, you can select them from a list or you
can enter them by hand.
Would you like me to automatically choose some CPAN mirror
sites for you? (This means connecting to the Internet) [yes]
Trying to fetch a mirror list from the Internet
Fetching with HTTP::Tiny:
http://www.perl.org/CPAN/MIRRORED.BY
Looking for CPAN mirrors near you (please be patient)
.......................... done!
New urllist
http://noodle.portalus.net/CPAN/
http://cpan.develooper.com/
http://mirrors.gossamer-threads.com/CPAN/
Autoconfiguration complete.
commit: wrote '/home/inoue/.cpan/CPAN/MyConfig.pm'
You can re-run configuration any time with 'o conf init' in the CPAN shell
Fetching with HTTP::Tiny:
http://noodle.portalus.net/CPAN/authors/01mailrc.txt.gz
Reading '/home/inoue/.cpan/sources/authors/01mailrc.txt.gz'
............................................................................DONE
Fetching with HTTP::Tiny:
http://noodle.portalus.net/CPAN/modules/02packages.details.txt.gz
Reading '/home/inoue/.cpan/sources/modules/02packages.details.txt.gz'
Database was generated on Wed, 04 Nov 2015 04:29:02 GMT
HTTP::Date not available
..............
New CPAN.pm version (v2.10) available.
[Currently running version is v2.00]
You might want to try
install CPAN
reload cpan
to both upgrade CPAN.pm and run the new version without leaving
the current session.
..............................................................DONE
Fetching with HTTP::Tiny:
http://noodle.portalus.net/CPAN/modules/03modlist.data.gz
Reading '/home/inoue/.cpan/sources/modules/03modlist.data.gz'
DONE
Writing /home/inoue/.cpan/Metadata
Running install for module 'Mail::Header'
Running make for M/MA/MARKOV/MailTools-2.14.tar.gz
Fetching with HTTP::Tiny:
http://noodle.portalus.net/CPAN/authors/id/M/MA/MARKOV/MailTools-2.14.tar.gz
Fetching with HTTP::Tiny:
http://noodle.portalus.net/CPAN/authors/id/M/MA/MARKOV/CHECKSUMS
Checksum for /home/inoue/.cpan/sources/authors/id/M/MA/MARKOV/MailTools-2.14.tar.gz ok
CPAN.pm: Building M/MA/MARKOV/MailTools-2.14.tar.gz
Checking if your kit is complete...
Looks good
Writing Makefile for Mail
Writing MYMETA.yml and MYMETA.json
MARKOV/MailTools-2.14.tar.gz
make -- NOT OK
'YAML' not installed, will not store persistent state
Running make test
Can't test without successful make
Running make install
Make had returned bad status, install seems impossible
Apache, PHP and modules
sudo apt-get install make gcc
sudo apt-get install apache2 libapache2-mod-php5 php5-common libmailtools-perl rrdtool librrds-perl
sudo perl -MCPAN -e 'install Mail::Header'
sudo cpan YAML
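A quick sanity check (a sketch; these are the modules the installer complained about) that the required Perl modules now load:
perl -MRRDs -e 'print "RRDs OK\n"'
perl -MMail::Header -e 'print "Mail::Header OK\n"'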
adduser netflow
adduser www
Installer warning: User 'netflow' not a member of group 'www'
On Ubuntu the Apache user/group is www-data, so put the netflow user in that group:
sudo usermod -g www-data netflow
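To confirm the group change took effect (a quick check, not from the original notes):
id netflow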
nfsen.conf
The portion to change:
# Required for default layout
$BASEDIR = "/data/nfsen";
# Run nfcapd as this user
# This may be a different or the same uid than your web server.
# Note: This user must be in group $WWWGROUP, otherwise nfcapd
# is not able to write data files!
$USER = "netflow";
# user and group of the web server process
# All netflow processing will be done with this user
# Apache default is "www-data"
$WWWUSER = "www-data";
$WWWGROUP = "www-data";
# nfdump tools path
#$PREFIX = '/usr/local/bin';
$PREFIX = '/usr/bin';
# Netflow sources
# Define an ident string, port and colour per netflow source
#
# Required parameters:
# ident identifies this netflow source. e.g. the router name,
# Upstream provider name etc.
# port nfcapd listens on this port for netflow data for this source
# set port to '0' if you do not want a collector to be started
# col colour in nfsen graphs for this source
#
# Optional parameters
# type Collector type needed for this source. Can be 'netflow' or 'sflow'. Default is netflow
# optarg Optional args to the collector at startup
#
# Syntax:
# 'ident' => { 'port' => '<portnum>', 'col' => '<colour>', 'type' => '<type>' }
# Ident strings must be 1 to 19 characters long only, containing characters [a-zA-Z0-9_].
%sources = (
'gin' => { 'port' => '5678', 'col' => '#0000ff', 'type' => 'netflow' },
'sim' => { 'port' => '9001', 'col' => '#ff0000', 'type' => 'netflow' },
);
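To feed test data into one of these sources, any NetFlow v5/v9 exporter will do; softflowd is one option (using it here is an assumption, not part of the original setup):
# run on a host whose traffic you want to export; interface and collector IP are placeholders
sudo apt-get install softflowd
sudo softflowd -i eth0 -n <collector_ip>:5678    # matches the 'gin' source above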
Config
sudo /data/nfsen/bin/nfsen reconfig
Start
sudo /data/nfsen/bin/nfsen start
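A hedged way to confirm the collectors came up on the configured UDP ports:
ps aux | grep '[n]fcapd'
sudo ss -ulnp | grep -E ':5678|:9001'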
Change Apache's DocumentRoot to match the nfsen configuration (/var/www):
vi /etc/apache2/sites-enabled/000-default.conf
Access to the web interface
http://<server_ip>/nfsen/nfsen.php
But the web interface then shows:
ERROR: nfsend connect() error: Permission denied!
ERROR: nfsend - connection failed!!
ERROR: Can not initialize globals!
Workaround that resolves this:
inoue@netflowc:/data/nfsen/var/run$ file *
nfsen.comm: socket
nfsend.pid: ASCII text
p2055.pid: ASCII text
p9001.pid: ASCII text
inoue@netflowc:/data/nfsen/var/run$ sudo chmod 777 ./nfsen.comm
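A narrower check before falling back to chmod 777 (assuming the socket lives under the $BASEDIR set above) is to see whether the web server user can actually write to it:
ls -l /data/nfsen/var/run/nfsen.comm
sudo -u www-data test -w /data/nfsen/var/run/nfsen.comm && echo writable || echo 'not writable'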
Changing graph color
I found the following in the nfsen mailing-list archive:
I’ve had better success changing these colors in the web interface. Click
on the stats tab for the live profile and edit each channel list. You can
change the color there also.
Please change the colours using the web interface. The colours in nfsen.conf are for initial (installation) purposes only and are actually no longer used in 1.3. They will be removed in future versions.
Apache2 setting
/etc/apache2/sites-available/000-default.conf
DocumentRoot /var/www
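After changing DocumentRoot, reload Apache (stock Ubuntu service name assumed):
sudo service apache2 reload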
Then go to http://<your_server>/nfsen/nfsen.php
NFS exports
After modifying the /etc/exports file, you need to run this command on the server:
$ exportfs -a
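For reference, a minimal /etc/exports entry might look like this (path and network are placeholders, not from the original notes):
/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)
and after editing it, re-export and list the result:
sudo exportfs -ra
sudo exportfs -v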
Also, when debugging NFS connectivity issues, you can run showmount -e <nfs server> to see what a given server is exporting.
Example:
$ showmount -e cobbler
Export list for cobbler:
/cobbler/isos 192.168.1.0/24
services running on nfs clients
You need to make sure that you have the following services running so that the clients can communicate with the NFS server:
$ chkconfig --list|grep rpc
rpcbind 0:off 1:off 2:on 3:on 4:on 5:on 6:off
rpcgssd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
rpcidmapd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
rpcsvcgssd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
As well as this one:
$ chkconfig --list|grep nfs
nfs 0:off 1:off 2:off 3:off 4:off 5:off 6:off
nfslock 0:off 1:off 2:off 3:on 4:on 5:on 6:off
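If any of these show up as off, a sketch of how to enable and start them (RHEL/CentOS 6 style, to match the chkconfig output above):
chkconfig rpcbind on && service rpcbind start
chkconfig nfslock on && service nfslock start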
rpcinfo
With the above services running, you should be able to check that the client can make remote procedure calls (RPC) to the NFS server like so:
$ rpcinfo -p cobbler
program vers proto port service
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 807 status
100024 1 tcp 810 status
100011 1 udp 718 rquotad
100011 2 udp 718 rquotad
100011 1 tcp 721 rquotad
100011 2 tcp 721 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100021 1 udp 60327 nlockmgr
100021 3 udp 60327 nlockmgr
100021 4 udp 60327 nlockmgr
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100021 1 tcp 57752 nlockmgr
100021 3 tcp 57752 nlockmgr
100021 4 tcp 57752 nlockmgr
100005 1 udp 750 mountd
100005 1 tcp 753 mountd
100005 2 udp 750 mountd
100005 2 tcp 753 mountd
100005 3 udp 750 mountd
100005 3 tcp 753 mountd
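You can also probe a single program/version from the client (program names are looked up in /etc/rpc), for example NFSv3 over TCP and mountd v1 over UDP:
rpcinfo -t cobbler nfs 3
rpcinfo -u cobbler mountd 1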
mounting and the kernel modules
I see what you wrote in an answer that you then deleted. You should have added that info to the question!
I can see where you were getting stumped now. I don't believe you're supposed to be mounting using:
$ mount -t nfsd ...
That should be:
$ mount -t nfs ...
Try changing that. I also see where you were ultimately getting stumped: you didn't have the nfs kernel module loaded.
$ modprobe nfs
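A hedged recap of the corrected sequence (server and paths are placeholders):
lsmod | grep nfs                                  # confirm the module is loaded
sudo mount -t nfs <server>:/export/path /mnt/point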
On Ubuntu version 17.04, my NFS shares are defined as follows:
Configuration
In /etc/exports:
/bottle/media 192.168.0.0/16(ro,all_squash,no_subtree_check,anonuid=65534,anongid=65534) 10.3.0.0/16(rw,all_squash,sync,no_subtree_check,anonuid=65534,anongid=65534)
UNIX file permissions for the shared volume:
$ ls -al /bottle
total 5
drwxr-xr-x 3 root root 3 Sep 3 11:45 .
drwxr-xr-x 28 root root 4096 Sep 3 00:37 ..
drwxrwxr-x 2 nobody nogroup 2 Sep 3 11:45 media
Verification
Ran sudo exportfs:
/bottle/media 192.168.0.0/16
/bottle/media 10.3.0.0/24
Checked the NFS server daemon:
$ sudo systemctl status nfs-server
● nfs-server.service - NFS server and services
Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
Active: active (exited) since Sun 2017-09-03 12:09:47 BST; 16min ago
Process: 23350 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
Process: 23344 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
Process: 23337 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
Process: 23380 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 23374 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 23380 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 4915)
Memory: 0B
CPU: 0
CGroup: /system.slice/nfs-server.service
Sep 03 12:09:47 monolith systemd[1]: Starting NFS server and services...
Sep 03 12:09:47 monolith systemd[1]: Started NFS server and services.
Verified that the UID/GID settings correspond to nobody and nogroup, respectively:
$ id -u nobody
65534
$ getent group nogroup
nogroup:x:65534:
Symptoms
The NFS server host is located at 10.3.0.100. The client (OS X Sierra v10.12.6) is at 10.3.0.102.
I attempted a connection using Finder's "Connect to Server" dialogue (Cmd + K), into which I entered nfs://10.3.0.100.
Doing so yields the following error: "You do not have permission to access this server."
Is this a configuration problem? What have I done wrong?
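One hedged thing to check from the macOS side (an assumption, not a confirmed diagnosis): macOS mounts NFS from a non-reserved source port by default, so either mount with resvport on the client or add insecure to the server-side export options.
showmount -e 10.3.0.100
mkdir /tmp/media
sudo mount -t nfs -o resvport 10.3.0.100:/bottle/media /tmp/media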
Hi.
I've been struggling with this for days. I've scoured the whole internet and, essentially, everyone writes the same thing. I can't mount a directory over NFS.
So, here is what I have:
Server:
debian lenny with an external IP (visible on the network)
Client:
debian lenny, installed as a guest system in VirtualBox
Pinging the server from the client works. The client machine has no trouble reaching the internet.
Task: mount the server's exported directory on the client machine.
My steps:
On the server I installed nfs-kernel-server, nfs-common and portmap.
On the client I installed nfs-common and portmap.
Configs:
/etc/hosts.deny
everything here is commented out
# /etc/hosts.deny: list of hosts that are _not_ allowed to access the system.
# See the manual pages hosts_access(5) and hosts_options(5).
#
# Example: ALL: some.host.name, .some.domain
# ALL EXCEPT in.fingerd: other.host.name, .other.domain
#
# If you're going to protect the portmapper use the name "portmap" for the
# daemon name. Remember that you can only use the keyword "ALL" and IP
# addresses (NOT host or domain names) for the portmapper, as well as for
# rpc.mountd (the NFS mount daemon). See portmap(8) and rpc.mountd(8)
# for further information.
#
# The PARANOID wildcard matches any host whose name does not match its
# address.
# You may wish to enable this to ensure any programs that don't
# validate looked up hostnames still leave understandable logs. In past
# versions of Debian this has been the default.
# ALL: PARANOID
/etc/hosts.allow
# /etc/hosts.allow: list of hosts that are allowed to access the system.
# See the manual pages hosts_access(5) and hosts_options(5).
#
# Example: ALL: LOCAL @some_netgroup
# ALL: .foobar.edu EXCEPT terminalserver.foobar.edu
#
# If you're going to protect the portmapper use the name "portmap" for the
# daemon name. Remember that you can only use the keyword "ALL" and IP
# addresses (NOT host or domain names) for the portmapper, as well as for
# rpc.mountd (the NFS mount daemon). See portmap(8) and rpc.mountd(8)
# for further information.
#
portmap:ALL
lockd:ALL
rquotad:ALL
mountd:ALL
statd:ALL
/etc/exports
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#
/home/user/nfs *(ro,sync,no_root_squash,,subtree_check)
* allows all IP addresses
rw: read/write
sync: synchronous access
subtree_check
no_root_squash: do not map root to the anonymous user
I restart the NFS daemon.
The server now seems to be configured and exports the directory /home/user/nfs.
Now, on the client, I try to mount the remote directory:
# mount server_ip:/home/user/nfs /home/user/server_dir
and get:
mount.nfs: server_ip:/home/user/nfs failed, reason given by server: Permission denied
I've seen online that this error occurs when the exported directories are not configured correctly in /etc/exports.
So, how do I configure this correctly?
Oh, and I also wanted to mention that the client machine is trying to access the shared folder on the server over the internet; could that be the reason the mount fails?
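A hedged first step for the question above: the quoted /etc/exports line contains a stray double comma before subtree_check, so fix that, then re-export and check visibility from the client (server_ip is the placeholder used in the question):
sudo exportfs -ra
sudo exportfs -v
showmount -e server_ip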
I have a problem mounting an NFS share that I can't solve, and it is driving me nuts. This is the situation:
Three machines involved:
Host A: mandrake, IP 192.168.1.4, NFS server
Host B: athlon64, IP 192.168.1.64, NFS client
Host C: lap-fzs-2, IP 192.168.1.27, NFS client
Host A has an NFS server running which exports a directory that gets mounted by host B. This works flawlessly and has been working for ages. No problem there. Now host C comes into the picture: Ubuntu 12.04 LTS, a modern system. I tried to mount the same share from host A but get a permission denied error:
root@lap-fzs-2:~# mount -t nfs mandrake:/data /data -onfsvers=2
mount.nfs: access denied by server while mounting mandrake:/data
The fact that it works between hosts A and B should be proof that the NFS export per se is working. Here is the info I can give that makes me think it should work. Maybe someone sees what I don't and knows why this fails on the new host C.
Server exports:
[root@mandrake /root]# cat /etc/exports
/suse 192.168.1.0/16(ro,no_root_squash)
/data 192.168.1.0/24(rw)
#/data3 192.168.2.0/24(rw)
#/data 192.168.2.0/16(rw,all_squash,anonuid=500,anongid=500)
#/data3 192.168.2.0/16(rw,all_squash,anonuid=500,anongid=500)
[root@mandrake /root]# exportfs
/suse 192.168.1.0/16
/data 192.168.1.0/24
The portmapper is running, and the exports are known and mounted by host B "athlon64".
[root@mandrake /root]# showmount -e
Export list for mandrake:
/data 192.168.1.0/24
/suse 192.168.1.0/16
[root@mandrake /root]# showmount -a
All mount points on mandrake:
atlhon64.acme.local:/data
When the athlon64 host mounts the NFS share, the server log shows success:
Feb 11 20:06:46 mandrake mountd[460]: authenticated mount request from atlhon64.acme.local:770 for /data (/data)
But when the host C tries to mount the same share, the server log shows:
Feb 11 20:12:42 mandrake mountd[460]: refused mount request from lap-fzs-2 for /data (/): no export entry
Host C sees the server and reaches the portmapper and the nfsd, but fails at the permission check.
root@lap-fzs-2:~# showmount -e 192.168.1.4
Export list for 192.168.1.4:
/data 192.168.1.0/24
/suse 192.168.1.0/16
root@lap-fzs-2:~# mount -t nfs -v mandrake:/data /data -onfsvers=2,proto=udp
mount.nfs: timeout set for Mon Feb 11 21:49:23 2013
mount.nfs: trying text-based options 'nfsvers=2,proto=udp,addr=192.168.1.4'
mount.nfs: prog 100003, trying vers=2, prot=17
mount.nfs: trying 192.168.1.4 prog 100003 vers 2 prot UDP port 2049
mount.nfs: prog 100005, trying vers=1, prot=17
mount.nfs: trying 192.168.1.4 prog 100005 vers 1 prot UDP port 636
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting mandrake:/data
I have to use NFSv2 on the client. NFSv4 fails because the server doesn't support it: the client tries to connect directly to TCP port 2049, the port isn't open, and no fallback happens. NFSv3 results in an RPC program/version mismatch.
What am I missing?
Update:
All three machines are on one LAN, on the same switch. There is no firewall active on host C:
root@lap-fzs-2:~# iptables -vnL
Chain INPUT (policy ACCEPT 17 packets, 1853 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 20 packets, 5611 bytes)
pkts bytes target prot opt in out source destination
Nor on host A:
[root@mandrake /root]# ipchains -L
Chain input (policy ACCEPT):
Chain forward (policy ACCEPT):
Chain output (policy ACCEPT):
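One more hedged check, not a confirmed fix: the server log identifies the client by hostname, so it is worth seeing how mandrake resolves 192.168.1.27 (stale or missing name resolution on very old mountd setups can lead to refused mounts):
grep lap-fzs-2 /etc/hosts
getent hosts 192.168.1.27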
I have Ubuntu and CentOS clients NFSv3-mounting a FreeBSD box, which got rebooted while the NFS clients were connected. Now the clients get a permission denied error when they try to access the mount points.
On the client I have tried:
# umount /nobackup/dat
umount.nfs: /nobackup/dat: device is busy
umount.nfs: /nobackup/dat: device is busy
# fuser /nobackup/dat
Cannot stat file /proc/1660/fd/473: Stale NFS file handle
Cannot stat file /proc/1660/fd/475: Stale NFS file handle
Cannot stat file /proc/1660/fd/476: Stale NFS file handle
Cannot stat file /proc/1660/fd/478: Stale NFS file handle
Cannot stat file /proc/1660/fd/479: Stale NFS file handle
Cannot stat file /proc/14509/fd/1: Stale NFS file handle
Cannot stat file /proc/14674/fd/1: Stale NFS file handle
Cannot stat file /proc/14871/fd/1: Stale NFS file handle
Cannot stat file /proc/27872/fd/436: Stale NFS file handle
Cannot stat file /proc/27872/fd/444: No such file or directory
# umount -f /nobackup/dat
umount2: Device or resource busy
umount.nfs: /nobackup/dat: device is busy
umount2: Device or resource busy
umount.nfs: /nobackup/dat: device is busy
Update
Now I have killed all the processes and successfully unmounted /nobackup/dat, but I still get the permission denied error for some reason.
# fuser -m /nobackup/dat 2>&1 | awk -F'/' '{print $3}' | xargs -n 1 kill
# fuser -m /nobackup/dat
# umount -l /nobackup/dat
# ll /nobackup/dat
ls: cannot open directory /nobackup/dat: Permission denied
# mount /nobackup/dat
mount.nfs: access denied by server while mounting (null)
Question
Any suggestions on how to debug this?
The problem is that the clients didn't realise that the NFS server went away, so they're still trying to access the file handle that was created the previous time they mounted the file system.
Normally, rebooting the client is a sure way of making it remount the file systems. But if you don’t want to do that, start by killing all processes that are trying to use the NFS file systems. After that, you can try a «lazy umount» with
umount -l
You might also try to remount the filesystem, using
mount -o remount
Otherwise, the old file handles will time out at some point, though I don't know how long that will take.
Once you've successfully gotten rid of the stale file handles, remount the filesystems:
mount /nobackup/dat
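A hedged recap of the steps above, using the paths from the question:
fuser -km /nobackup/dat    # kill anything still holding the stale mount
umount -l /nobackup/dat    # lazy unmount
mount /nobackup/dat        # remount (assumes an /etc/fstab entry for this path)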