[UFO Chicago] Network fileshares

Nate Riffe inkblot@movealong.org
Thu, 21 Nov 2002 02:26:13 -0600


Just now Ian Bicking made 15 LEDs in my apartment flash with this:
> What do people use for network fileshares?  I'm getting annoyed with SSH
> and all that junk, and I'd really like to just be able to mount network
> drives over the internet.

I was using SFS until it broke recently.  I spent a day trying to
figure out what broke and eventually gave up.  The relevant Debian
packages are sfs-server and sfs-client.  The whole thing piggybacks
over NFS, with the SFS server talking to a local NFS server as an NFS
client, and likewise the SFS client talking to a local NFS client as
an NFS server.  It does this in a way that allows you to deny NFS
access to all hosts except localhost.  Using SFS involves generating a
key on the server and registering with the SFS auth daemon, and then
having an sfsagent process running on the client.  Actually accessing
remote files is best achieved with symlinks, since all that stuff
magically appears under /sfs once everything is running.
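
From memory, the setup goes roughly like this (the hostid in the
self-certifying path is a placeholder, and the exact sfskey
invocations may differ between versions):

    # on the server, as the user who will access the files:
    $ sfskey register           # generate a key and register it with
                                # the local sfsauthd
    # on the client:
    $ sfsagent me@server.example.org    # authenticate via the agent
    $ ln -s /sfs/@server.example.org,<hostid>/home/me ~/remote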

Major problems with SFS:

The current Debian packages appear to be broken.  I don't know what
changed or why it's even failing, but for whatever reason, if you
install sfs-server and then try to use sfskey or install sfs-client
and then try to use sfsagent, they will be unable to connect to some
socket that's supposed to be in /var/lib/sfs.  I gave up and
uninstalled it about a month ago, and then reinstalled about a week
ago to see if somehow it would work.  Nope.

SFS has bad failure modes.  If the nfsmounter process dies for any
reason, you're left with these un-umountable and inaccessible NFS
mounts in /sfs.  In my experience, the nfsmounter process is
reasonably stable, but nonetheless, there's always the possibility.
If the client and server cannot contact each other for some reason
(routing problems, line failure, coffee spills, etc.), the NFS mounts
under /sfs become un-umountable and inaccessible.  If you restart the
server while clients are connected (SFS uses TCP connections), the
client can, for reasons not entirely clear, take a considerable amount
of time to reconnect, during which the NFS mounts under /sfs are
un-umountable and inaccessible.  If at any time the NFS mounts under
/sfs become un-umountable or inaccessible, any processes that try to
access stuff under there will block in the D state until access
returns, and in cases where access will not return, those processes
are hung until you reboot.  Considering that symlinks are the sensible
way of actually using stuff shared over SFS, "accessing" the SFS stuff
can be as trivial as running ls -lF in a directory containing such a
symlink.
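
You can spot the casualties with ps: anything whose state column
shows D is blocked in the kernel and won't even respond to kill -9.

    $ ps axo pid,stat,cmd | awk '$2 ~ /^D/'    # list processes stuck
                                               # in uninterruptible sleep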

SFS is great when it's working, but I gave up on it.  YMMV.

> There's NFS, which I've never much used -- I never get good feelings
> from people, security and otherwise... can it work well over the
> internet?  How hard is it to install on the server?  I can install
> whatever junk on my (client) workstation, but it's harder on the
> servers.

NFS is designed with the assumption that it will be used over secure
networks.  Anyone who can get a hold of the NFS handles for your
shares and (if you've configured host-based auth) spoof packets from
the client's IP address can do anything on those shares that the
legitimate client can.  Considering that all that traffic is
unencrypted UDP packets, it's really not hard to see why a secure
network is strongly suggested.  Also worth considering is the history
of exploitable buffer overflows in the Linux RPC daemons.
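
If you do run NFS at all, at least keep the export list tight.  A
minimal /etc/exports (hostname hypothetical) would look like:

    # /etc/exports -- share one tree with exactly one trusted host,
    # with remote root mapped to nobody
    /home/shared    client.example.org(rw,root_squash)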

That's not to say that you shouldn't use NFS, but that you shouldn't
use *just* NFS.  If you set up, for instance, a CIPE tunnel between
the client and the server, you gain both a secure network and the
freedom to block off your RPC daemons from would-be attackers.  I
haven't really played around with tunnels other than CIPE, which is a
pretty decent point-to-point encrypted tunnel package.
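
From memory of CIPE's options file, a point-to-point setup looks
something like this (addresses, port, and key are all made up; check
the CIPE docs for the authoritative option names):

    # /etc/cipe/options on the client
    ipaddr   10.0.1.2                   # our end of the tunnel
    ptpaddr  10.0.1.1                   # the server's end
    peer     server.example.org:6789    # peer's real address and port
    key      0123456789abcdef0123456789abcdef   # shared 128-bit key

    # then do NFS over the tunnel addresses only:
    #   server's /etc/exports:  /home/shared  10.0.1.2(rw)
    #   client:  mount -t nfs 10.0.1.1:/home/shared /mnt/shared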

> There's Coda, which I don't know much about... it sounds a lot better,
> but also less mature.  Has anyone tried it?

I took a brief look at Coda.  It was brief because I got to the part
about setting up "Coda partitions" on the server.  I don't know if
it still works that way or if the guys at CMU eventually got their
heads out of their asses.  The features that Coda provides do sound
pretty cool, though.

> [...] DAV (on Apache) [...]
> [...] Gnome VFS, Emacs' Tramp and EFS [...]

Can't help you there, but that DAV stuff sounds interesting.

> There's also the possibility of using CVS as a sort of networked
> filesystem.  Have other people tried this?  I'm still not really
> comfortable with CVS, though I've finally gotten used to the basics.  It
> does offer some other useful features besides networking...

CVS is very manual.  Every modification on a CVS client "mount" would
have to be committed back to the repository by issuing a 'cvs ci'
command, which might be automatable if the client machine were running
the HURD and the CVS mount were handled by yet another weird kernel
service.  And the
repository would not function as a local filesystem on the server
where it is located, like an NFS share does (or for that matter, the
shares of any networked filesystem that I can think of).
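
The cycle you'd be signing up for, for every round of edits (server
and module names hypothetical):

    $ cvs -d :ext:me@server.example.org:/var/cvs checkout mystuff
    $ cd mystuff && vi notes.txt
    $ cvs ci -m "edited notes"   # nothing reaches the server until this
    $ cvs update                 # and nothing comes back until this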

> Any other ideas?  (I'm searching for optimal productivity, one piece of
> the environment at a time.)

Dare I say Samba?  I generally reserve Samba for sharing stuff to and
mounting stuff from actual Windows machines, but you may find it of
use in this instance.
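
For completeness, mounting a share from a Linux client goes something
like this (server and share names made up):

    $ smbmount //server.example.org/shared /mnt/shared -o username=me
    # or, equivalently:
    $ mount -t smbfs -o username=me //server.example.org/shared /mnt/shared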

I'm actually also looking for a good encrypted, cryptographically
authenticated, and preferably compressed network filesystem that
follows UNIX filesystem semantics and fails gracefully in the presence
of network failures or unavailability.

-Nate

-- 
--< ((\))< >----< inkblot@movealong.org >----< http://www.movealong.org/ >--
American currency is neither red, white, nor blue.
pub  1024D/05A058E0 2002-03-07 Nate Riffe (06-Mar-2002) <inkblot@movealong.org>
     Key fingerprint = 0DAC F5CB D182 3165 D757  C466 CD42 12A8 05A0 58E0