Van's NFSv3 mini-HOWTO
Introduction
I recently needed to set up NFS between some Red Hat Linux systems.  I had two different types of
NFS connections to set up:  a permanent read-only (RO) directory for copying software, and some
automatically mounted home directories.  Although you can use the redhat-config-nfs GUI
tool, it is good to know what is going on under the hood.  These are my notes on setting up NFS
in both scenarios.
These notes are based on a stock Red Hat 8 NFS server and a stock Red Hat 9 client.  The NFS
server is named "im" and has IP address 192.168.1.2 with a /24 netmask. The NFS client is named
"tp1" and has IP address 192.168.1.191 with a /24 netmask. In this example, I will force the use
of NFSv3.
Exporting a Read-Only File System Mapped to the Anonymous UID/GID:
This is an example of a directory that you want
available throughout your LAN, but you don't want anyone writing to that directory.
  • Add the following entry to /etc/exports:
    # Van's NFS export file
    /Data/Photos    192.168.1.0/24(ro,all_squash,anonuid=65534,anongid=65534)
    (There should already be a user called "nfsnobody" with UID/GID=65534)
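    Before relying on the anonuid/anongid mapping, you can confirm the anonymous account
    exists on the server; the entry should look something like this (the UID may differ on
    other distributions):
    # grep nfsnobody /etc/passwd
    nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin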
  • Add the following entry to /etc/hosts.deny:
    portmap: ALL
           
  • Add the following entry to /etc/hosts.allow:
    portmap: 192.168.1.0/255.255.255.0
    This prevents hosts from other networks from connecting to the portmapper.
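    Since /etc/hosts.allow is checked before /etc/hosts.deny, you can later grant access to a
    single outside host (a hypothetical admin box at 10.0.0.5, say) with one extra client in
    the list:
    portmap: 192.168.1.0/255.255.255.0, 10.0.0.5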
           
  • Now start the portmapper, nfsd, and related daemons:
    # /etc/init.d/portmap start
    Starting portmapper:                                       [  OK  ]
    # /etc/init.d/nfs start
    Starting NFS services:                                     [  OK  ]
    Starting NFS quotas:                                       [  OK  ]
    Starting NFS daemon:                                       [  OK  ]
    Starting NFS mountd:                                       [  OK  ]
    # /etc/init.d/nfslock start
    Starting NFS statd:                                        [  OK  ]
           
  • When you make changes to /etc/exports, you should enter the following command to re-read
    the export table:
    # exportfs -rv
    exportfs: No 'sync' or 'async' option specified for export "192.168.1.0/24:/Data/Photos".
      Assuming default behaviour ('sync').
      NOTE: this default has changed from previous versions
    exporting 192.168.1.0/24:/Data/Photos
           
  • Now, let's make sure everything is working properly:
    # rpcinfo -p
       program vers proto   port
        100000    2   tcp    111  portmapper
        100000    2   udp    111  portmapper
        100011    1   udp    762  rquotad
        100011    2   udp    762  rquotad
        100011    1   tcp    765  rquotad
        100011    2   tcp    765  rquotad
        100003    2   udp   2049  nfs
        100003    3   udp   2049  nfs
        100021    1   udp  32781  nlockmgr
        100021    3   udp  32781  nlockmgr
        100021    4   udp  32781  nlockmgr
        100005    1   udp  32782  mountd
        100005    1   tcp  33396  mountd
        100005    2   udp  32782  mountd
        100005    2   tcp  33396  mountd
        100005    3   udp  32782  mountd
        100005    3   tcp  33396  mountd
        100024    1   udp  32783  status
        100024    1   tcp  33397  status
    # showmount -e
    Export list for im.vanemery.com:
    /Data/Photos 192.168.1.0/24
    # exportfs
    /Data/Photos    192.168.1.0/24
    The netstat -tuap command should also give you a good idea of what TCP and UDP ports are
    listening now.
           
  • Final server setup:  You will want to make sure that the portmapper and all the NFS-related
    daemons start at boot time.  You can do this with the following commands (the "off" commands
    clear any existing runlevel links; the "on" commands then enable the daemons for runlevels 3, 4, and 5):
    # chkconfig portmap off
    # chkconfig nfs off
    # chkconfig nfslock off
    # chkconfig --level 345 portmap on
    # chkconfig --level 345 nfs on
    # chkconfig --level 345 nfslock on
           
    Also, you need to make sure that you don't have a Linux firewall like iptables automatically
            blocking the connections.
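    If you do run iptables, the simplest policy on a trusted LAN is to accept everything from
    the local subnet, because mountd, statd, and nlockmgr pick dynamic ports (visible in the
    rpcinfo -p output above).  A minimal sketch:
    # iptables -A INPUT -s 192.168.1.0/24 -j ACCEPT
    Pinning the RPC daemons to fixed ports so they can be filtered individually is possible,
    but beyond the scope of these notes.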
      
           
Mounting the Read-Only NFS Filesystem on the NFS Client:
     Prerequisites:  The client must be running the portmapper service and the rpc.statd service.  If you need
     file locking, you must also be running the NFS lock daemon.  You do not need to be running
     rquotad, nfsd, or mountd. By running /etc/init.d/portmap and /etc/init.d/nfslock, I had everything
     ready for the NFS mount.  After mounting the NFS partition, running rpcinfo -p on the client showed me
     that the "status" RPC service had been started automatically.  This appears as "rpc.statd" in the
     ps listings. You will probably want to secure the portmapper and other RPC services
     with /etc/hosts.allow and /etc/hosts.deny, just like you did on the server.
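     On tp1, that preparation amounted to the following (plus the same chkconfig treatment
     as on the server, so the daemons come back after a reboot):
     [tp1]# /etc/init.d/portmap start
     [tp1]# /etc/init.d/nfslock start
     [tp1]# chkconfig --level 345 portmap on
     [tp1]# chkconfig --level 345 nfslock on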
  • On the client, make sure you do not have a firewall like iptables enabled, or it will
            prevent you from making the NFS connection.  Now, let's check RPC and UDP connectivity to the
            NFS server:
    [tp1]# ping im
    [tp1]# showmount -e im
    Export list for im:
    /Data/Photos 192.168.1.0/24
    [tp1]# rpcinfo -p im
    [tp1]# tracepath im/2049
    1?: [LOCALHOST]     pmtu 1500
    1:  im (192.168.1.2)                                       0.371ms reached
         Resume: pmtu 1500 hops 1 back 1
           
    Based on the output of these commands, you should be able to see if the client will be able
            to make an NFS connection to the server or not.
           
  • Make a mountpoint for the directory:
    [tp1]# mkdir /mnt/Photos
           
  • Mount the NFS server manually:
           
    [tp1]# mount -t nfs -o hard,intr,ro,rsize=2048,wsize=2048,nfsvers=3 im:/Data/Photos /mnt/Photos
    [tp1]# mount
    im:/Data/Photos on /mnt/Photos type nfs (ro,hard,intr,rsize=2048,wsize=2048,addr=192.168.1.2)
           

  • Test from root and non-root accounts on tp1 to see if directory and file read operations work.  Commands like
            df, du, ls, cd, and cp should work just fine.
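    For example, as a non-root user (the file name below is just a placeholder; the write
    should fail with something like this because the export is read-only):
    [tp1]$ ls /mnt/Photos
    [tp1]$ cp /mnt/Photos/somefile.jpg /tmp
    [tp1]$ touch /mnt/Photos/test
    touch: cannot touch `/mnt/Photos/test': Read-only file system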
  • Now, let's look at stats back on the server:
    # showmount -a
    All mount points on im.vanemery.com:
    tp1:/Data/Photos
    # nfsstat
    # nfsstat -o net
    Warning: /proc/net/rpc/nfs: No such file or directory
    Server packet stats:
    packets    udp        tcp        tcpconn
    353441     353441     0          0
    Client packet stats:
    packets    udp        tcp        tcpconn
    0          0          0          0
    # ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr 00:E0:29:42:F8:C2
              inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:391457 errors:0 dropped:0 overruns:0 frame:37
              TX packets:698615 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:100
              RX bytes:89280722 (85.1 Mb)  TX bytes:846404790 (807.1 Mb)
              Interrupt:10 Base address:0xb400
    # netstat -s
    # cat /proc/net/snmp
           
  • Back on the client, let's make sure that no process or user is accessing files via NFS, including
    having a shell open in the NFS directory. Then we will unmount the NFS directory like this:
    [tp1]# umount /mnt/Photos
  • Let's set up our /etc/fstab file on the client so that the NFS mount happens automatically at
    boot:
    im:/Data/Photos  /mnt/Photos  nfs  ro,hard,intr,rsize=2048,wsize=2048,nfsvers=3,bg 0 0
    You can test this with the mount -av command without rebooting the client.
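    For example, mount -av walks /etc/fstab and mounts anything that is not already mounted:
    [tp1]# umount /mnt/Photos
    [tp1]# mount -av
    [tp1]# mount | grep Photos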
           
     Final Comments on Read-Only NFS
     I selected the rsize=2048 and wsize=2048 options for this NFS mount based on some testing I did with
     copying 90MB worth of files over the network to the client with different rsize settings.  I tried a
     number of settings, and 2048 was the second fastest, behind 8192.  8192 produced UDP datagrams fragmented
     into 6 IP packets, 5 of which were max size at 1514 bytes on the wire.  I don't want that many max-sized
     packets or fragments on my LAN.  2048 gave reasonable performance and only produced 2 IP fragments
     per UDP datagram.  This leaves more bandwidth on the LAN for other traffic, especially if you are using a
     repeater instead of a switch.
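     If you want to repeat this kind of test, here is a rough sketch (the test directory is a
     placeholder; remount with a different rsize between runs and compare the times):
     [tp1]# mount -t nfs -o ro,nfsvers=3,rsize=8192 im:/Data/Photos /mnt/Photos
     [tp1]# time cp -r /mnt/Photos/testdir /tmp/testdir
     [tp1]# rm -rf /tmp/testdir ; umount /mnt/Photos
     Running tcpdump -i eth0 host im during the copy will show the fragmentation behavior
     described above.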
    Using the Automounter to Mount Home Directories over the Network:
    What if you would like users on a remote host to be able to mount their home directories from a
    central server?  With conventional NFS mounts, only root can mount and unmount NFS directories.  
    With the automount utility (a.k.a. autofs), NFS directories can be mounted and unmounted automatically
    as needed by regular users.
     This section will show you how to set up the automounter on an NFS client so that machine tp1's
     users can log in to their home directories on a central NFS server.  This has several advantages:
    • user files are centrally stored and managed
    • user files can easily be backed up
    • the server is more likely to be running hardware RAID and to be physically secured

     In this scenario, host "im" is the NFS server and host "tp1" is the client. User "gishj" will have his
     home directory located on the NFS server.  Also, the partition /dev/hdb2 is mounted at /ahome
     as ext3.  This partition has user quota support enabled.
    Note:  NFS assumes that users have the same UID and GID on the client machine as they
    do on the server machine.  The UIDs, GIDs, and usernames can be synchronized via several mechanisms, which
    are outside the scope of this mini-HOWTO.  Here is a short list of possibilities:
    • NIS
    • NIS+
    • LDAP
    • /etc file replication via various mechanisms
    • manual user/group creation and synchronization on each host
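     Of these, manual creation is the simplest for a couple of hosts: create the user with an
     explicit UID on both machines and confirm the IDs agree (Red Hat also creates a matching
     user-private group, but it is worth checking with id):
     # useradd -u 600 gishj
     # id gishj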

  • On the server, identify a directory for NFS-mounted user homes.  Create the directory
            and partition as needed.  Each user's home directory needs to be "chown"ed and "chmod"ed appropriately.
            For this example, I will create a new user and group for "gishj" on the server and the client with the same
            UID and GID.
    [root@im /]# mkdir /ahome
    [root@im /]# useradd -d /ahome/gishj -u 600 gishj
           
    Note that the /etc/skel files were put here by the useradd utility, allowing you
            to create a standard user environment for every user on the server.
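     useradd sets sane ownership on the new home directory, but if you ever create one by
     hand, something like this is appropriate (mode 0700 keeps other users out):
     [root@im /]# chown gishj:gishj /ahome/gishj
     [root@im /]# chmod 700 /ahome/gishj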
           
           
  • Now, let's export the filesystem via NFS with appropriate options.  Add this line to
            /etc/exports:
    /ahome          192.168.1.0/24(rw,sync,root_squash)
            Activate the NFS export and verify it with these commands:
    # exportfs -rv
    exporting 192.168.1.0/24:/ahome
    # showmount -e
    Export list for im.vanemery.com:
    /ahome       192.168.1.0/24
    # exportfs -v
    /ahome          192.168.1.0/24(rw,wdelay,root_squash)
           
  • On the client, let's make sure we can see the NFS export on the server:
    [tp1]# showmount -e im
    Export list for im:
    /ahome 192.168.1.0/24
           
  • On the client, let's set up the automount map by adding this to /etc/auto.master:
    /autohome       /etc/auto.autohome      --timeout=120
    Create a new file called /etc/auto.autohome and add these lines:
    # This is for mounting user homes over NFS
    # Format = key [-mount-options-separated-by-comma] location
    *       -fstype=nfs,rw,hard,intr,rsize=2048,wsize=2048,nosuid,nfsvers=3 im:/ahome/&
           
     The wildcard key "*" matches whatever directory name is requested under /autohome, and "&" substitutes
     that same name into the server path, so each username maps to its own NFS path.  "rw" allows read and write
     operations, and "nosuid" is a security option.  If you want to read more about the allowable wildcards
     in the autofs maps, try man 5 autofs.
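     For example, when anything references /autohome/gishj, the key "gishj" matches "*",
     "&" expands to that same key, and the automounter effectively performs:
     mount -t nfs -o rw,hard,intr,rsize=2048,wsize=2048,nosuid,nfsvers=3 im:/ahome/gishj /autohome/gishj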
           
  • Now, let's create the mountpoint and activate the automounter, and set it up to start
            automatically on boot:
    [tp1]# mkdir /autohome
    [tp1]# /etc/init.d/autofs start
    Starting automount:                                        [  OK  ]
    [tp1]# chkconfig autofs off
    [tp1]# chkconfig --level 345 autofs on
           
           
  • We need to make an account and add a password for user "gishj".  Remember, the UID/GID needs to
            be the same on the client as on the server, and we will be configuring Joe Gish's home directory to
            be at /ahome/gishj.
    [root@tp1 /]# useradd -M -d /autohome/gishj -u 600 gishj
    [root@tp1 /]# passwd gishj
    Changing password for user gishj.
     New password: ********
     Retype new password: ********
    passwd: all authentication tokens updated successfully.
           
  • Now, log in on tp1 as "gishj".  The login should succeed.  If you issue a pwd command, you will
    see that you are in the /autohome/gishj directory, and you will be able to read and write
    files with no problem.  If you use the "mount" command, you will see this:
    automount(pid1510) on /autohome type autofs (rw,fd=5,pgrp=1510,minproto=2,maxproto=3)
    im:/ahome/gishj on /autohome/gishj type nfs (rw,nosuid,hard,intr,rsize=2048,wsize=2048,nfsvers=3,addr=192.168.1.2)
            After Joe Gish logs out, the automounter will unmount the NFS directory two minutes later (our
            --timeout=120 setting).  When you use the mount command on tp1 again (as root), here is what you will see:
    automount(pid1510) on /autohome type autofs (rw,fd=5,pgrp=1510,minproto=2,maxproto=3)
           
    Final Comments on Automounting NFS directories:
    The automount maps can be distributed via NIS, NIS+, LDAP, or other means.  Back on the NFS
    server, you can watch the automount operation with watch showmount -a.  Note that the user on
     the NFS clients does not need to be able to log in to the NFS server.  If you don't run passwd for
     the user on the server, they will not be able to log in, but they can still use their home directory over
    the network.
    Limiting the disk space for each user on the NFS server with quotas:
     If you are using ext2/ext3 or ReiserFS for your automount partition on the server, then you can set up
    quotas for each user.  This limits how much disk space each user can have.  This may also be possible
    in some kernels with JFS and XFS, but I have not looked into this.  When quotas are enabled, the
    user on the NFS client can still view his or her quota by typing the quota command.  I tested the
    quota function by logging in as one of the users and copying lots of files to my home directory.  
    As expected, when I exceeded my quota, further copy operations failed with an error.  Removing some files
    fixed my quota problem, and I could write to the NFS directory again.
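     Quota administration itself is outside the scope of these notes, but the usual sequence on
     the server looks something like this (assuming the usrquota mount option is already present
     on the /ahome line in /etc/fstab):
     [root@im /]# quotacheck -cug /ahome
     [root@im /]# quotaon /ahome
     [root@im /]# edquota -u gishj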
    NFSv4
    This mini-HOWTO has focused on NFSv3 for Linux.  The NFSv2 and NFSv3 implementations for Linux are
    fairly mature now, with the exception of TCP support (client and server) for NFSv3, which will be
    incorporated in future kernels.  NFSv4 is being actively developed for Solaris and Linux.
    NFSv4 will become an Internet standard for filesharing over a network.  It has some key improvements
    over NFSv3:
    • Will work over TCP with portmapper and mount functions integrated.  This means a client can
              connect to a server with a single TCP connection
    • Works with/through NAT firewalls
    • Includes MANDATORY encryption and security support (Kerberos 5 and LIPKEY)
    • Support for ACLs

    NFSv4 will be part of the standard 2.6 Linux kernels.
    http://www.vanemery.com/Linux/NFS-Van.html

    This article is from the ChinaUnix blog; the original post is at: http://blog.chinaunix.net/u/31550/showart_548158.html