Linux file-sharing in a home wifi network

The scenario: the home network is centered around a Linux server. This acts as (amongst a number of other things) a large data repository. All our media files, photos, music and so on are stored on it. Apart from the convenience of having it all centrally located, it also provides data security: all critical data is archived hourly using rsnapshot, such that there is always a backup from at least one month ago in the event of data being e.g. accidentally deleted. It uses a single 1TB disk as the main data store, with a second 1TB disk for the snapshots. Then in addition to that, really really critical data (the irreplaceable stuff) is archived every night to an off-site location. Anyway, in recent times my children have discovered the pleasures of photography… Vast quantities of pictures to be put on a PC and secured. To date it’s gone like this:

  • Kids use a single laptop, running Linux.
  • Each has an account on the laptop.
  • Plug camera in to laptop and pull the pictures on to the laptop.
  • In background, cron archives them off to the server using rsync over ssh.
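
That background step is just cron driving rsync over ssh. A minimal sketch of what such a crontab entry can look like (the paths and schedule are illustrative assumptions, not my exact setup; bobby is the server we'll meet properly below):

# Hourly: mirror new photos to the server over ssh (passwordless key assumed)
0 * * * * rsync -a -e ssh /home/kid/Pictures/ bobby:/data/kid/Pictures/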

As far as the kids themselves are concerned, there’s (a) a single laptop and (b) it has all their photos on it and (c) papa has assured them that if something terrible happened to the laptop, the pictures can be restored from the server.

Thus far, fine.

The network expands

Time to change… Precipitated by an additional laptop, things get kinda complicated. I want the laptops to be "floating", usable by either child. No "the HP is mine, the IBM is his". However, that makes things tricky: with only the single laptop, it is the primary (since only) data store for their photos, and backups aside, that's straightforward. So I need to shift the primary data stores off the laptops themselves and have them full-time on the server, accessed over the network. Which is fine, except that performance is going to be an issue: these are laptops, connected to the home network using wifi, so network file systems are potentially a problem (ever tried regularly scanning several thousand photos over a wifi connection…? …it's not something you want to do regularly!)

So we’re going to need network file systems with some sort of magical optimisation…

The solution

We’re going to create a solution with several key elements:

  • The server is going to have data stores for each laptop user, shared out on the network using NFSv4.
  • The clients (the laptops) are going to use a caching file system on top of the NFS shares, to attempt to provide less load on the wifi connections.
  • The clients are going to need to auto-mount the correct data stores depending upon which user is using them.

A quick word on file system caching here: books can (and have!) been written on such subjects… Suffice to say that it's easy to fall into the trap of thinking that caching is always a good idea and simply must improve performance. Not at all. Any file system cache can provide improved OR degraded performance depending upon how it is used (lots of small files or large files, regularly or infrequently accessed, the underlying file system type… the list is long and has multiple permutations).

Suffice to say that here the "performance improvement" we are after is load reduction on the wifi network. I am not going to go into the whys and wherefores, but will here blindly assume that a caching layer between the client and the NFS shares is a good idea in the circumstances. Your mileage may vary – and BTW, it's rather fun to test and compare. Try it!

The Elements

  • The server is a Linux server running Ubuntu Server 11.04. (None of this configuration is particularly specific to Ubuntu, or even to Debian-derived distros, so any Linux should do.)
  • NFS Server on the, er, server.
  • Ubuntu laptop clients, again 11.04, but also fairly generically applicable.
  • Clients to have the NFSv4 client code, FS-Cache and automounting capability.

Setting up the Server side

We’ve 3 users to be catered for in this exercise. For the rest of this article I’m going to call them A, B and C. The server itself goes by the name bobby.

On bobby, if not there already, I need to install the following packages:

  • nfs-common
  • nfs-kernel-server
  • portmap
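
For example, all three in one go:

apt-get install nfs-common nfs-kernel-server portmap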

The locations, on the server, of my data stores are going to be:

  • /data/A
  • /data/B
  • /data/C

First step is to create the bindings within the export directory. Edit /etc/fstab to look like this:

# NFS bindings to /export
/data/A              /export/A        none    bind    0 0
/data/B              /export/B        none    bind    0 0
/data/C              /export/C        none    bind    0 0

Make sure that the directories /export/A, /export/B and /export/C are created first, for example:
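
mkdir -p /export/A /export/B /export/C

Then a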

mount -a

should bind the real locations to the export locations. Check with a

mount

which should display something like this:

/mnt/DATA1/data/A on /export/A type none (rw,bind)
/mnt/DATA1/data/B on /export/B type none (rw,bind)
/mnt/DATA1/data/C on /export/C type none (rw,bind)

and shows that the bindings are there. This also illustrates another point: the ultimate locations of the data stores are actually /mnt/DATA1/data/A, …B etc. These are then symlinked to /data/A, …B etc. for convenience. One can refer to the symlinked location in fstab no problem; however the mount command dereferences it and shows the final location, as here. That's all fine.

Now a word about NFS server configuration….! It’s a potential minefield. If you need to, start off over here: Ubuntu Guide to NFS

But you may be better off trying this ultra-simplified version I present, before thinking about tweaking stuff.

For now, all I do is edit /etc/exports to contain:

/export	 192.168.0.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/A    192.168.0.0/24(rw,nohide,insecure,no_subtree_check,async)
/export/B    192.168.0.0/24(rw,nohide,insecure,no_subtree_check,async)
/export/C    192.168.0.0/24(rw,nohide,insecure,no_subtree_check,async)

The only parts of that I'll go into are:

  • The /export locations are those that you bind to in the fstab declarations.
  • The ip range (note it’s a range, not a single address) covers the location of my clients (i.e. the laptops on the home network all have addresses of the form 192.168.0.XXX)
  • That first line is required! Its fsid=0 option marks /export as the NFSv4 pseudo-filesystem root, and the clients' mount paths are then given relative to it.
  • rw = read/write access (probably what you want)

and then fire up the NFS server with:

/etc/init.d/nfs-kernel-server start
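
One handy extra: if you change /etc/exports later while the server is running, you can have it re-read without a full restart:

exportfs -ra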

All being well, we’re then pretty much done on the server side.

A word on user IDs…

NFS is….. kinda quirky. Some of those quirks relate to how remote clients are recognised (authenticated) by the server. There are a multitude of ways this can happen, all optional and all somewhat different. Many are also rather complex… Want some advice? OK: since the server and the clients are all under "your" control, make it easy. And "easy" here means as follows: ensure that users' UIDs are the same on the server and all the clients.

Put practically, here's what I'm talking about: on the server, do

cat /etc/passwd

to produce something reminiscent of this (lightly obfuscated):

.
.
.
A:x:1001:100:person named A:/home/A_directory:/bin/sh
B:x:1002:100:person named B:/home/B_directory:/bin/sh
C:x:1003:100::/home/c_directory:/bin/sh

It’s the numbers you care about: for example, User A has UID=1001 and GID=100.

Now check the same file on either future client laptop:

.
.
.
C:x:1003:1000:c,,,:/home/ccc:/bin/bash
A:x:1001:1001:a:/home/aaa:/bin/bash
B:x:1002:1002:b:/home/bbb:/bin/bash

Note that we've used the same UIDs for a given user. (The GIDs differ in this example; the UIDs are the critical part here, but if you rely on group permissions you'll want matching GIDs too.)

If you’ve a lot of users, or an existing setup which you cannot easily change, then pick (and learn about!) one of the many NFS schemes for dealing with this. But if you’ve a small, and/or changeable set up, do yourself a big favour and go with matched UIDs…!
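
A quick way to eyeball just the relevant fields on each machine (a small sketch; substitute your real usernames for the obfuscated A, B and C):

awk -F: '/^(A|B|C):/ {print $1, $3, $4}' /etc/passwd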

The Clients – Basic NFS

This is where things get fun. Before we dive into caching, automounting and so on, let's make sure that basic NFS works OK for us. On a laptop, edit /etc/fstab to be like this:

.
.
.
# NFS
bobby:/C /home/cc/bobby    nfs4     rw,hard,intr    0 0
bobby:/A /home/aa/bobby    nfs4     rw,hard,intr    0 0
bobby:/B /home/bb/bobby    nfs4     rw,hard,intr    0 0

Note:

  • bobby is the server name. You could use the raw IP address here, or, if DNS is working (or you've a static entry in /etc/hosts), you can use the name.
  • Note the server location is given as just "/A", not "/export/A": with NFSv4 the path is relative to the fsid=0 pseudo-root declared in /etc/exports. This is a difference between NFSv3 and NFSv4.
  • the mount point on the client is entirely arbitrary, but it makes sense for it to be “under” the particular user’s home directory, as here.

With that done, do a

mount -a

and check that all three mounts work. Apart from checking via the mount command's output, also log in as each user, go to that user's NFS-mounted directory, and create a file, edit it, and delete it. Does that all work OK? If so, grand. If not, STOP and get the basic NFS setup working before proceeding! This is really important… debug basic NFS issues first.
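
If you prefer to script that check, something along these lines for each user will do (a sketch using the example users and mount points from above; adjust to yours):

sudo -u A sh -c 'cd /home/aa/bobby && echo hello > test_file && cat test_file && rm test_file'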

The Clients – Caching

The next layer we are going to introduce is client-side caching.

Again, I'm going to present a highly simplified (and highly effective!) setup, glossing over a vast array of optional complexity and trouble… (If you fancy it, here's a good starting point for the details: Red Hat Guide to FS-Cache.)

We’re going to use FS-Cache. Install it using:

apt-get install cachefilesd

All you then really need to do is edit /etc/default/cachefilesd and set:

RUN=yes

Then start it using:

/etc/init.d/cachefilesd start

Note that you may need to add the mount option user_xattr on the file system containing the cache files (which will typically be the root file system or, if you have /var broken out on a separate partition, that one). So your /etc/fstab entry might look like this:

UUID=5f63b76a-d367-49e4-a540-d7ab77b891fe /               ext4    errors=remount-ro,user_xattr 0       1
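
You can usually apply that to the live system without a reboot (assuming, as in the line above, that the cache lives on the root file system):

mount -o remount,user_xattr /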

Go back to your /etc/fstab and edit each NFS line to include the option fsc, like this:

.
.
.
# NFS
bobby:/C /home/cc/bobby    nfs4     rw,fsc,hard,intr    0 0
bobby:/A /home/aa/bobby    nfs4     rw,fsc,hard,intr    0 0
bobby:/B /home/bb/bobby    nfs4     rw,fsc,hard,intr    0 0

Then (re)mount the NFS shares and see if it works!
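
Adding fsc to an already-mounted share isn't something I'd trust a remount to do; the safe way is a full unmount and mount of each share, e.g.:

umount /home/aa/bobby
mount /home/aa/bobby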

Here's a good point, if you wish, to test one mount with caching and one without, and run some comparisons in your environment and mode of using it, to see how much (if at all!) caching helps… If you'd rather skip that and just check that caching is doing something – so you know it works in some manner or other – then you can just

cat /proc/fs/fscache/stats

and check for signs of any life. See some? Great. Plough on.
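
And if you do fancy a rough with/without comparison, time a metadata-heavy pass over each mount (a crude sketch; unmount and remount between runs so the local page cache doesn't flatter the second run):

time find /home/aa/bobby -type f | wc -l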

The Clients – Automounting

Why not just leave things as they are, with each laptop, when powered up, mounting all the users' NFS shares? Well, you could. But it's not ideal.

  • Bandwidth and time. Mounting all of them, when likely only one will be used, takes time and bandwidth.
  • Concurrency. In theory, and hopefully practice, NFS will handle this. But it’s just so much neater and less contentious (joke intended…) to only mount the share where and when it’s being used. Then it gets dropped when it’s not used.

So unmount the NFS shares currently in place, and then edit /etc/fstab to comment them out there too. From here on the shares will not be mounted from fstab, but by the automounter.

We need to install the automounter first, so do that using:

apt-get install autofs

The autofs documentation, and many of the online resources, are fairly confusing for a newcomer! The automounter is a very flexible piece of software, and has to handle many different situations – hence the complexity. But we can keep it simple…

  • First, edit /etc/default/autofs
  • Add/uncomment
MOUNT_NFS_DEFAULT_PROTOCOL=4
  • In theory, you can leave this alone and specify NFSv4 in the mount options – however I had a lot of trouble with that, and given that we're in a simple all-v4 environment, it's a lot simpler to just change it here.
  • You might want to think about tuning the timeouts here, but don't feel obliged. Leave everything else as-is.
  • Edit /etc/auto.master
  • Comment out the last line, and add a new one, like this:
.
.
.
# Commented out next line:
#+auto.master
# My new nfs automount details:
/-	/etc/auto.nfs
  • So you comment out the +auto.master and add in the reference to /etc/auto.nfs
  • Now create /etc/auto.nfs and make it similar to this:
/home/aa/bobby	-rw,fsc,hard,intr	bobby:/export/A
/home/bb/bobby	-rw,fsc,hard,intr	bobby:/export/B
/home/cc/bobby	-rw,fsc,hard,intr	bobby:/export/C
  • Note the /home/aa/bobby part is arbitrary – it is the local mount point and can have any name.
  • Note also the fsc parameter, to ensure we use the caching filesystem layer as previously set up.
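
With that in place, restart the automounter and then simply access a mount point – the access itself should trigger the mount (paths as per the map above):

/etc/init.d/autofs restart
ls /home/aa/bobby
mount | grep bobby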

Conclusion

And that's about it. With such a configuration, my kids can log on to either laptop, open "their" folder called bobby/ and, hey presto, they have full access to their data on the server, all invisibly assisted by a caching layer.

It’s a little fiddly to set up, but not so confusing if you remember how it breaks down:

  • Set up NFS on the server and share out the server locations.
  • Set up NFS on the clients and ensure simple mounting works.
  • Set up FS-Cache on the clients and check that NFS uses it OK via normal mounting.
  • Finally set up automounter on the client to have NFS automagically only mount as and when required.

Addendum 1: Shutdown hangs…

With everything up and running nicely, I noticed one frequent problem with the client machines: they would no longer shut down (or thus reboot) properly. The shutdown started but usually "hung" before completion. I'm pretty certain it's a "shutdown job ordering" type of issue, whereby unmounting NFS, turning off caching and killing the wifi are either done in the wrong order or maybe just too close together in time… So one may need to tweak the shutdown/reboot kill tasks. Which these days is more of a headache than it used to be "in the good old days", since we have to consider tasks run via SysV rcN.d/ jobs and "upstart" jobs as well…

When I've tuned them right, I'll try and remember to update here. But if you see the shutdown/reboot hanging issue, the solution is in that whole area. Strictly speaking, it's an Ubuntu distro bug.

EDIT: 17 May. The hang on shutdown/reboot issue is surprisingly difficult to resolve! Looking into it, it seems that autofs and wifi have a long history of not getting along – Ubuntu has a heap of bug reports concerning this issue, over quite a period of time. Yet they seem not to get resolved. For my part, for now I've actually stopped using the auto-mounting feature, as it's just a "nice to have" and far and away the least critical aspect of the above setup. A shame, but not a big shame.
