smbfs vs cifs

We have a Linux server connected to a Windows 2003 box via gigabit Ethernet, and we were using Samba to share files between the two. For whatever reason it would only get about 26Mb/s pulling via an smbfs mount. Just as a test I mounted the remote file system as CIFS (mount -t cifs) instead of smbfs, and all of a sudden we were able to get about 140Mb/s. That still raises the question of why it maxes out at 140Mb/s instead of the full 1000Mb/s, but it's a heck of a lot better. Even an FTP transfer tops out at 160Mb/s. I'm assuming that's a limitation of Windows?
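
For reference, here's roughly what the two mount commands look like. The server name, share, mount point, and credentials below are placeholders, not our actual setup:

    # smbfs mount (the slow one for us, ~26Mb/s)
    mount -t smbfs -o username=user,password=pass //winbox/share /mnt/share

    # cifs mount (the faster kernel client, ~140Mb/s for us)
    mount -t cifs -o username=user,password=pass //winbox/share /mnt/share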
Replies
b 2006-05-19 01:06pm

Are you sure you're not getting your MB/s and Mb/s confused? 140-160 MB/s (megabytes per second) is probably close to maxing out your storage bus or drive capacity (are you using Ultra160 SCSI?). If it really is 140-160 Mb/s (megabits; remember there are 8 bits per byte) that you're getting, then it is fishy. You'll never reach 1000Mb/s due to TCP and protocol overhead (which increases with transfer speed), but reaching 80% is feasible if your CPU and hardware are up to the task.
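
To make the units concrete, some plain arithmetic (nothing specific to your setup):

    # Mb/s to MB/s: divide by 8; MB/s to Mb/s: multiply by 8
    echo $((140 / 8))   # 17 MB/s -- slow for a gigabit link
    echo $((140 * 8))   # 1120 Mb/s -- more than gigabit can carry, so 140 "MB/s" would be suspicious
    echo $((1000 * 80 / 100 / 8))   # 100 MB/s -- a realistic ceiling at ~80% efficiency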

Scott Baker 2006-05-19 01:14pm

I should have noted in the post that if I boot both machines into Linux and do an FTP transfer from one to the other, I can get about 900Mb/s. That's if I write the output file to /dev/null; otherwise the transfer gets bogged down waiting for the disk to write those bytes. I'm more worried about the theoretical performance I can't even get.
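
For anyone curious, the kind of test I mean looks like this (host and file names are placeholders, and netcat flags vary between versions):

    # Pull a big file over FTP but throw the bytes away, so disk speed doesn't skew the number
    wget ftp://otherbox/bigfile -O /dev/null

    # Or take the filesystem out entirely and measure raw TCP throughput
    # on the receiver:  nc -l -p 5001 > /dev/null
    # on the sender:    dd if=/dev/zero bs=1M count=1000 | nc otherbox 5001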

All content licensed under the Creative Commons License