
NFSv4 notes



NFSv4 does allow you to designate filehandles as "volatile on migration", in which case the client can attempt to look the filehandles up again after migration. This only works if the server actually tells the client about the migration (via NFS4ERR_MOVED and the fs_locations attribute, both added in v4). If the failover were truly "transparent" to the client, the client couldn't tell that a migration happened in the first place, so this wouldn't help. The Linux client implementation of migration isn't finished yet anyway. There are also inherent problems with looking filehandles up again after migration: if the file a filehandle refers to has moved since you originally looked it up, looking it up again under the path you originally got it from isn't going to help.
NFSv4 does:
1) Require an NFSv4 domain name.
2) Use constant ports, for easier firewalling (see the sketch after this list).
3) Have the ability to automatically negotiate secure data transmission using Kerberos, and two other methods (LIPKEY and SPKM-3, which are related to each other, and not implemented on Solaris 10 or Fedora Core 3?). It can also use the old reserved-port scheme, in which case you're faced with the same ages-old security problems.
4) (Solaris 10 has the ability to pick an NFSv4 domain name on its own. I've taken a crack at setting up DCS' autoinstall to do so.)
5) Not work with typical IP-based failover tools like Whackamole, despite the constant ports.
6) Support its own form of failover (migration), described above.
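For example, since the server side of v4 needs only TCP port 2049, a firewall rule and a Kerberos-secured mount might look like the following (a sketch; the hostname is made up):

    # allow NFSv4 through the firewall; the server needs only TCP port 2049
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT

    # mount with Kerberos (sec=krb5); with -t nfs4 the path is relative
    # to the server's pseudo-root (hypothetical hostname)
    mount -t nfs4 -o sec=krb5 nfs-server.example.com:/ /mnt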
NFSv4 on Linux docs:
http://www.citi.umich.edu/projects/nfsv4/linux/
http://www.citi.umich.edu/projects/nfsv4/linux/using-nfsv4.html
> How hard would it be to set up one of the following on FC3 and Solaris 10?
>
> LIPKEY (based on SPKM-3)
> SPKM-3

Very hard. They don't support it... ;-)

> If we just continue using AUTH_UNIX with NFSv4, are we still relying on reserved ports?

Yes.
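So with AUTH_UNIX (sec=sys), the server still falls back on the reserved-port check. A sketch of a Linux /etc/exports entry (the client address range is hypothetical); 'secure' is the default and means requests must come from source ports below 1024:

    # /etc/exports: 'secure' (the default) requires client source ports < 1024;
    # fsid=0 marks the NFSv4 pseudo-root on Linux servers of this vintage
    /export  192.168.0.0/24(rw,sync,secure,fsid=0)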
Solaris 10, if getting I/O errors: make sure the NFSv4 ID-mapping (nfsmapid) domain is set correctly; see /etc/default/nfs.
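The relevant line in /etc/default/nfs looks like the following (the domain value here is made up; it has to match the NFSv4 domain the other machines use):

    # /etc/default/nfs (Solaris 10): NFSv4 domain used by nfsmapid
    NFSMAPID_DOMAIN=example.com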
From Trond on nfs@lists.sourceforge.net:

> > An NFSv4 client has no use for statd. The NFSv4 protocol has its own
> > methods for tracking client and server reboots.
>
> lockd too, I hope?

Yes. Lockd is gone too. The only daemons you need to run on a client in order to get NFSv4 working are

- idmapd (for translating names/groups <-> uid/gid)
- rpc.gssd (only if you are using RPCSEC_GSS)
- portmap

On the server you have to be running

- idmapd (for translating names/groups <-> uid/gid)
- portmap (this requirement will hopefully go away soon)
- rpc.svcgssd (only if you are exporting RPCSEC_GSS)
- rpc.mountd (not used by clients, only by nfsd itself; feel free to firewall away the ports it sets up)

... and the nfsd processes.
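Since idmapd is required on both client and server, here is a minimal sketch of a Linux /etc/idmapd.conf (the domain value and the nobody accounts are assumptions; the domain must be the same on client and server):

    # /etc/idmapd.conf (Linux)
    [General]
    Domain = example.com

    [Mapping]
    # accounts to use for IDs that don't map (assumed to exist locally)
    Nobody-User = nobody
    Nobody-Group = nobody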
Another from Trond: Servers that don't store anything to disk will usually support reboot recovery, but will offer a more limited recovery model (i.e. one client may find that another has stolen its locks due to races). A server that supports the full NFSv4 state recovery model will usually store some client information on permanent disk (recent Linux servers store a list of all the "clientid"s in /var/lib/nfs/v4recovery). Strictly speaking, though, this is really only necessary in order to resolve some very obscure state-recovery corner cases. For instance, a double reboot of the server combined with a network partition may lead to two clients believing they have locked the same file, unless the server has stored information about which client lost its state during the period between the two reboots (see the discussion in RFC 3530).
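On a recent Linux server you can see what it has recorded (path from Trond's note above):

    # list the clientid records the server keeps for state recovery
    ls /var/lib/nfs/v4recovery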

