For the past couple of years, I’ve used SSHFS to access my fileserver remotely (mostly from work). It’s always been pretty slow and it isn’t very stable on Solaris, so I’ve switched to NFSv4 over SSH. My biggest hangup with NFS was how to secure it over the internet: its Kerberos support is complete overkill for my needs, and I never wanted to deal with the complications of scripting the setup of an SSH tunnel, either. It all seemed so fragile.
Then I discovered autossh which does all the work of setting up and maintaining the tunnel for me. I coupled that with an executable autofs map to automatically start the tunnel just before trying to mount a share, like:
#!/bin/sh
export AUTOSSH_PIDFILE=/var/run/autossh-falcon.pid  # autossh writes its pid here; adjust the path to taste
if [ -f "$AUTOSSH_PIDFILE" ]; then
    kill -HUP "$(cat "$AUTOSSH_PIDFILE")"  # tunnel already up: HUP makes autossh re-establish it
else
    autossh -f -M 0 -o ServerAliveInterval=5 -NL 2050:localhost:2049 jlee@falcon
fi
echo "-fstype=nfs4,port=2050 localhost:/nest/$1"
Using an executable autofs map allows me to avoid reconciling the differences between service managers like SMF and Upstart, offering a consistent way to start the tunnel exactly when it’s needed on both Solaris and Linux. When you ‘cd’ into a directory managed by autofs, autossh is started or woken up, then the share is mounted over the tunnel. If there is a network interruption or change (from wired to wireless, for example), ssh will disconnect after 15 seconds of inactivity and autossh will restart it. NFS is smart enough to resume its operation when the tunnel is reestablished.
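For autofs to treat the script as a program map, it has to be registered in the master map and have its executable bit set. A minimal sketch, assuming the map lives at /etc/auto.nest and mounts under /mnt/nest (both names are examples, not from the setup above):

```
# /etc/auto.master -- register an indirect map for /mnt/nest.
# Because /etc/auto.nest is executable (chmod +x), autofs runs it as a
# program map, passing the requested share name as $1.
/mnt/nest  /etc/auto.nest
```

With that in place, `cd /mnt/nest/music` runs the script with `music` as its argument and mounts the share it prints.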
autossh has built-in support for heartbeat monitoring, but I’ve found SSH’s built-in ServerAliveInterval feature to be more reliable.
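The 15-second figure mentioned above falls out of ssh’s defaults: the client sends a keepalive probe every ServerAliveInterval seconds and gives up after ServerAliveCountMax (default 3) unanswered probes. The equivalent settings in ~/.ssh/config, using the host alias from the examples above:

```
Host falcon
    ServerAliveInterval 5
    ServerAliveCountMax 3   # the default; 3 probes x 5s = 15s before ssh disconnects
```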
With this setup I have very simple, robust, and secure remote access to my fileserver.
Writing this here so I don’t forget:
Another option is to use Upstart. Create a file /etc/init/falcon-tunnel.conf containing:
start on starting autofs
stop on stopped autofs
respawn
exec autossh -M 0 -o ServerAliveInterval=5 -NL 2050:localhost:2049 falcon
Then define a direct map in /etc/auto.direct:
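The map entry itself isn’t shown above; a direct-map line consistent with the executable map’s mount options might look like this (the /mnt/falcon mountpoint is an assumption):

```
# /etc/auto.master -- point autofs at the direct map
/-  /etc/auto.direct

# /etc/auto.direct -- mount the share through the local tunnel endpoint
/mnt/falcon  -fstype=nfs4,port=2050  localhost:/nest
```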
There’s something elegant about the simplicity of this solution.