# NFS - Network File System 🖧
In a closed network (where you know every device), NFS is a fine choice. With a good network, throughput is disgustingly fast and at the same time less CPU-intensive on the server. It’s very simple to set up and you can toggle read-only on shares you don’t need to be writeable. - NFS vs Samba
NFS & Symlink
Symbolic links only contain a path to another file or directory on the originating system where they’re being shared from. Unless you take care to make the links relative or to duplicate the same directory structures on remote systems as the originator of the share, they simply will not work.
This can be easily fixed: see symlink
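A minimal illustration of the difference, using made-up local paths (no NFS needed to see the effect):

```shell
# Illustrative only: /tmp/nfs-demo stands in for the exported directory
mkdir -p /tmp/nfs-demo/share/archive
cd /tmp/nfs-demo/share

# Absolute target: only resolves on the server itself
ln -sfn /tmp/nfs-demo/share/archive latest-abs

# Relative target: resolves on any client, wherever the share is mounted
ln -sfn archive latest-rel
readlink latest-rel    # -> archive
```

On a client that mounts the share at, say, /nfs/share, `latest-rel` still points inside the mount, while `latest-abs` points at a path that probably does not exist there.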
Client Configuration 💻
Basics
List shares exported by a given server
$ showmount -e [server ip]
Mount Shared Directory (Manual)
$ sudo mkdir -p /nfs/backup
$ sudo mount 192.168.0.125:/mnt/Backup /nfs/backup
or
- use permanent mount
- use autofs / automount (see below)
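For a permanent mount, the equivalent /etc/fstab entry might look like this (reusing the example server and paths from above; options are a conservative suggestion, tune to taste):

```
# /etc/fstab
192.168.0.125:/mnt/Backup  /nfs/backup  nfs4  _netdev,nofail  0  0
```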
Auto-mounting an NFS share
Automount (systemd)
The main difference is that autofs, with the right auto scripts, will dynamically list the available shares. So you don’t need to pre-define and hard-code which machines/shares should be made available.
With systemd’s automount, only shares which you have pre-configured will be visible.
Install
$ sudo apt install nfs-common
Autofs Setup
$ sudo apt-get install autofs
Add the following line at the end of /etc/auto.master
/nfs /etc/auto.nfs --timeout=30 --ghost
Create /etc/auto.nfs, and specify nfs mount options:
<server-name> -fstype=nfs4,soft <server-ip/host>:/
tronaut -fstype=nfs4,soft tronaut:/
Restart the autofs service:
$ sudo systemctl restart autofs
# now test that it works
$ ls /nfs/tronaut/ # pressing tab should list the available shares
See also FS-Cache to minimize network traffic for users accessing data from a file system mounted over the network.
Host Configuration 🖥
Install
$ sudo apt install nfs-kernel-server
Edit /etc/exports:
$ sudo micro /etc/exports
directory_to_share client(share_option1,...,share_optionN)
Then restart service: sudo systemctl restart nfs-kernel-server
Example
Set up a user-specific folder on the NAS
On the server, create a ZFS dataset / NFS share
$ sudo zfs create -o mountpoint=/mnt/tronaut-yves storage_pool/tronaut-yves
# add it to NFS exports (/etc/exports)
$ sudo micro /etc/exports
# /mnt/tronaut-yves *(rw,sync,no_subtree_check,no_root_squash)
$ sudo systemctl restart nfs-kernel-server
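Instead of a full service restart, the export can also be reloaded and verified with the standard nfs-kernel-server tools (run on the server):

```shell
sudo exportfs -ra        # re-read /etc/exports and re-export everything
sudo exportfs -v         # list active exports with their effective options
showmount -e localhost   # what a client would see
```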
Option details
*: Allow all clients to connect; otherwise you can specify clients (by IP, hostname, subnet or domain).
rw: This option gives the client computer both read and write access to the volume.
sync: This option forces NFS to write changes to disk before replying. This results in a more stable and consistent environment since the reply reflects the actual state of the remote volume. However, it also reduces the speed of file operations.
no_subtree_check: This option prevents subtree checking, which is a process where the host must check whether the file is actually still available in the exported tree for every request. This can cause many problems when a file is renamed while the client has it opened. In almost all cases, it is better to disable subtree checking.
no_root_squash: By default, NFS translates requests from a remote root user into a non-privileged user on the server. This was intended as a security feature to prevent a root account on the client from using the file system of the host as root. no_root_squash disables this behavior for certain shares.
see also
- Export a partial subtree - use bind mount to only export a subtree through NFS.
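A sketch of the bind-mount approach, assuming /srv/data/public is the subtree to publish (all paths are examples):

```shell
# Bind the subtree to a dedicated export root
sudo mkdir -p /export/public
sudo mount --bind /srv/data/public /export/public

# Make the bind permanent via /etc/fstab:
#   /srv/data/public  /export/public  none  bind  0  0

# Then export /export/public in /etc/exports as usual
```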
Refs
Use fs-cache + cachefilesd with NFS
This can improve speed; the principal goal is to decrease network traffic.
**It does not give offline access to data** - this is not OneDrive.
FS-Cache needs to be installed locally:
$ sudo apt install cachefilesd
And it must be enabled in /etc/default/cachefilesd
-> RUN=yes
and then sudo systemctl enable --now cachefilesd
Now when mounting NFS, use -o fsc to enable FS-Cache support
sudo mount -o fsc yourserver:/yourpath /mnt/nfs
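Assuming the kernel has FS-Cache support, you can check that the cache is actually in use (the stats file appears once the fscache module is loaded):

```shell
findmnt -t nfs4 -o TARGET,OPTIONS   # the share's options should include "fsc"
cat /proc/fs/fscache/stats          # counters grow as files are read through the cache
```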
Permanent Mount
Avoid autofs mounts for permanent mounts:
- if the autofs mount point has been removed (e.g. after its timeout), its content is unreliable.
- do not share a mount point between autofs and cachefilesd (do not use /nfs if that is the autofs mount point).
On Client
$ mkdir ~/Tronaut # as user yves
# manual mount
$ sudo mount -t nfs4 -o fsc tronaut:/mnt/tronaut-yves /home/yves/Tronaut
or create a systemd mount unit
systemd .mount unit filenames must be the escaped version of the mount path.
$ systemd-escape -p --suffix=mount /home/yves/Tronaut
=> home-yves-Tronaut.mount
in /etc/systemd/system/home-yves-Tronaut.mount
# /etc/systemd/system/home-yves-Tronaut.mount
[Unit]
Description=Mount Tronaut NFS Share for yves
After=network-online.target
Wants=network-online.target
[Mount]
What=tronaut:/mnt/tronaut-yves
Where=/home/yves/Tronaut
Type=nfs4
Options=_netdev,fsc,auto,nofail,x-systemd.automount,x-systemd.device-timeout=10,timeo=14
[Install]
WantedBy=multi-user.target
$ sudo systemctl daemon-reload
$ sudo systemctl enable home-yves-Tronaut.mount
$ sudo systemctl start home-yves-Tronaut.mount
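Note that `x-systemd.automount` is documented as an /etc/fstab translation option; with a native .mount unit, on-demand mounting is instead done with a companion .automount unit (a sketch, reusing the escaped name from above):

```ini
# /etc/systemd/system/home-yves-Tronaut.automount
[Unit]
Description=Automount Tronaut NFS Share for yves

[Automount]
Where=/home/yves/Tronaut
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target
```

Then enable the automount unit instead of the mount unit: `sudo systemctl enable --now home-yves-Tronaut.automount`.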
Config
sudo xed /etc/cachefilesd.conf
Default backend is in /var/cache/fscache
There are six culling controls in /etc/cachefilesd.conf
Configure Cache culling:
brun N% (percentage of blocks) & frun N% (percentage of files): the amount of free space and the number of available files in the cache. If both values rise above these percentages, culling is turned off.
bcull N% (percentage of blocks) & fcull N% (percentage of files): if the amount of available space or the number of available files falls below either of these limits, culling is started.
bstop N% (percentage of blocks) & fstop N% (percentage of files): if the amount of available space or the number of available files falls below either of these limits, allocation of disk space stops until culling raises them back above the set percentages.
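For reference, the values typically shipped as defaults in /etc/cachefilesd.conf (percentages refer to the filesystem holding the cache):

```
dir /var/cache/fscache
tag mycache
brun 10%
frun 10%
bcull 7%
fcull 7%
bstop 3%
fstop 3%
```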
Handling host unavailability
When an NFS share becomes unavailable, it will cause any application trying to access the files to hang (including ls).
systemd automount
hard: retry failed requests indefinitely until the server responds; effectively blocks until success.
timeo=5: timeout in tenths of a second (i.e., 0.5 seconds).
retrans=2: retry a request this many times before giving up (only meaningful with soft).
intr: allows NFS requests to be interrupted if the server becomes unresponsive, so system calls like ls, cat, or even umount on the NFS mount can be interrupted with signals like Ctrl+C or SIGTERM. (Note: ignored since kernel 2.6.25, where only fatal signals can interrupt.)
x-systemd.device-timeout: configure how long systemd should wait for the backing device to show up before giving up on an entry from /etc/fstab. Specify a time in seconds or explicitly append a unit such as “s”, “min”, “h”, “ms”.
Use systemd’s automounting to mount only when accessed.
Example systemd mount unit
[Mount]
What=nfsserver:/export/path
Where=/mnt/nfs
Type=nfs
Options=_netdev,fsc,auto,nofail,x-systemd.device-timeout=10,soft,timeo=5,retrans=3,intr
TimeoutSec=30
Sharing Home folder 🚧
Things to consider ⚠️
- ssh keys will be shared as well
- to be able to connect to the other system, your own key needs to be added to ~/.ssh/authorized_keys
- firefox will refuse to start, thinking it’s already running
It is advisable to keep separate configs for the two systems (missing applications / different versions):
- consider using a dotfile manager (like chezmoi or yadm) to manage user-specific configs smartly.
see also
- Is it possible to store user’s home directories remotely?
- how to deal with shared home directory on linux?
see also
- How can I cache NFS shares on a local disk?
- Can FS-Cache be configured to always cache the full file on read?
- “sync” command over NFS - consider just `sudo mount /nfs-mount -o remount`
- How to unmount NFS when server is gone? - `umount -f -l /mnt/myfolder`, and that will fix the problem.
- Why is Linux NFS server implemented in the kernel as opposed to userspace?