[vox-tech] NFS:mount RPC timed out

Jeff Newmiller jdnewmil at dcn.davis.ca.us
Mon Sep 3 11:13:55 PDT 2007


xiao.liang at cn.alps.com wrote:
> 
> Both hosts.allow and hosts.deny are empty.

Oh, well.
Google sez busybox mount does not support NFSv4.
Perhaps you need to write a patch for busybox, use a
full "mount" executable on your client... or set up
NFSv3 on your server....
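
If the server can still serve v3, that last option may just be a
mount option away. A minimal sketch, assuming this busybox build
passes NFS options through to its nfsmount code (that is
version-dependent, so treat nfsvers= and nolock as things to
verify against your build):

    # request NFSv3 explicitly; nolock avoids needing a lock
    # daemon on a minimal busybox system
    mount -t nfs -o nfsvers=3,nolock 10.25.16.130:/opt/Qtopia /opt/Qtopia

Most Linux NFSv4 servers also answer v3 requests unless v3 was
deliberately disabled, so a server-side change may not even be
needed.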

> *Jeff Newmiller <jdnewmil at dcn.davis.ca.us>*
> Sent by: vox-tech-bounces at lists.lugod.org
> 09/01/2007 11:03 AM
> Reply to: "lugod's technical discussion forum" <vox-tech at lists.lugod.org>
> To: "lugod's technical discussion forum" <vox-tech at lists.lugod.org>
> cc:
> Subject: Re: [vox-tech] NFS:mount RPC timed out
> 
> I am no NFS guru, but I would check hosts.allow on the client.
> 
> xiao.liang at cn.alps.com wrote:
>  >
>  > I use kernel 2.4.20 and busybox 1.3.1; the NFS server is v4. When I
>  > try to mount using
>  >
>  > mount -t nfs 10.25.16.130:/opt/Qtopia /opt/Qtopia
>  >
>  > I get the error: mount: RPC timed out
>  >
>  > When I use Ethereal on both the server and client side, I found that
>  > the server has sent SYN,ACK back to the client for the TCP connection,
>  > but the client just does not respond. I use portmap and also inetd.
>  > The tcpdump log looks like this:
>  >
>  > 10.25.16.160.794 > 10.25.16.130.sunrpc: S, cksum 0x5740 (correct), 31406388
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x79e6 (correct), 2533961748:2
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x79e6 (correct), 2533961748:2
>  > 10.25.16.160.794 > 10.25.16.130.sunrpc: S, cksum 0x5614 (correct), 31406388
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x76ff (correct), 2533961748:2
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x76ff (correct), 2533961748:2
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x76f8 (correct), 2533961748:2
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x76f8 (correct), 2533961748:2
>  >
>  > 10.25.16.160.794 > 10.25.16.130.sunrpc: S, cksum 0x53bc (correct), 31406388
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x7123 (correct), 2533961748:2
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x7123 (correct), 2533961748:2
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x711c (correct), 2533961748:2
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x711c (correct), 2533961748:2
>  >
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x6563 (correct), 2533961748:2
>  > 10.25.16.160.794 > 10.25.16.130.sunrpc: S, cksum 0x4f0c (correct), 31406388
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x6561 (correct), 2533961748:2
>  > 10.25.16.130.sunrpc > 10.25.16.160.794: S, cksum 0x6563 (correct), 2533961748:
>  >
>  >
>  > Is this a problem with portmap or inetd?
>  > Here is some related info:
>  >
>  > In /etc/services
>  > sunrpc 111/tcp portmap
>  > sunrpc 111/udp portmap
>  >
>  > In /etc/inetd.conf
>  > #rstatd/1-3 dgram rpc/udp wait root /usr/sbin/tcpd rpc.rstatd
>  > #rusersd/2-3 dgram rpc/udp wait root /usr/sbin/tcpd rpc.rusersd
>  >
>  > The client uses a random TCP port to connect to server port 111; who
>  > then handles the SYN,ACK packet the server sends back -- inetd? Should
>  > I add a line to inetd.conf like "794 stream tcp nowait root portmap",
>  > even though 794 is a random port number? Or should there be a tcpd or
>  > something similar running on my board?
>  >
>  > Please help!
>  >
>  > ps output:
>  >
>  > /usr/sbin/inetd
>  > /sbin/portmap
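
Incidentally, no inetd.conf entry is needed for the client's
outgoing port: inetd only manages inbound listeners, and the
SYN,ACK is handled by the client's own TCP stack. To sanity-check
the RPC layer directly, a quick sketch, assuming rpcinfo is
available (busybox alone does not provide it, so you may need to
run it from another machine):

    rpcinfo -p 10.25.16.130      # list services registered with portmap
    rpcinfo -u 10.25.16.130 nfs  # ping the NFS service over UDP

If "rpcinfo -p" stalls the same way mount does, the problem is in
the portmap conversation itself rather than in NFS.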

-- 
---------------------------------------------------------------------------
Jeff Newmiller                        The     .....       .....  Go Live...
DCN:<jdnewmil at dcn.davis.ca.us>        Basics: ##.#.       ##.#.  Live Go...
                                       Live:   OO#.. Dead: OO#..  Playing
Research Engineer (Solar/Batteries            O.O#.       #.O#.  with
/Software/Embedded Controllers)               .OO#.       .OO#.  rocks...1k
---------------------------------------------------------------------------

