Discussion:
[Samba] Samba ignoring socket options?
Mike Myers
2008-08-29 00:53:31 UTC
Permalink
Hi everyone. I am running Samba 3.2.0-22.1 (as packaged by OpenSUSE in
11.0) on a storage server connected to multiple windows based clients
over a gigabit ethernet link. The server is a quad core Intel CPU and
is equipped with an Intel e1000 based gigabit ethernet controller and plugged into a common gigabit ethernet switch with the windows clients.

I am seeing performance issues on transfers over the gigabit ethernet network,
and have been experimenting with the socket options settings in the smb.conf
file to improve transfer rates. But no matter what I set the SO_RCVBUF
and SO_SNDBUF values to, the transfer rates are unchanged, even if I
set the buffer sizes down to 512, which should at least slow things
down dramatically. This leads me to question whether Samba is actually
using these settings at all. TCP_NODELAY is set, but it doesn't seem to
matter whether I include it on the socket options line or not. The line
is definitely not commented out: if I misspell something on that line,
samba terminates with an error when I try to restart the daemon.
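For reference, the line in question looks like this (buffer values here are just one of the combinations I tried, not a recommendation):

```
socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
```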

Samba is getting roughly 30 MB/s transfer rates from the linux server to a Windows Vista and a Windows XP client,
and the disks on both Windows machines are RAID0 (4- and 2-disk RAID0
sets respectively), so I don't think I am running into filesystem
performance issues on the target. Moving from the Windows systems to Samba, I see about 45 MB/s transfer rates.

The
raid array on the samba server consists of two 6-disk RAID5 sets with fast
disks, running LVM and XFS for a filesystem. I can do a dd of
a multigigabyte file to /dev/null and get roughly 500-600 MB/s
transfer rates through the filesystem, so I don't think the raid array
and file system are a bottleneck.
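For reference, the read test is just something along these lines (the path is a placeholder), with a file large enough that caching doesn't dominate:

```
dd if=/path/to/bigfile of=/dev/null bs=1M
```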

I have run netperf tests
between the server and the clients to see if I had some network
plumbing problems. With default socket settings for netperf (8182
buffer size), I get about 300 Mbps transfer rates between the clients
and the server (which approximately matches the 30 MB/s transfer
rates). With 65536-byte buffers, that number goes to 970 or so Mbps,
so I think the interface cards, TCP stack, and switches are all OK.
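For the record, the netperf invocations were along these lines (flags reconstructed from memory, so treat them as approximate):

```
# default socket buffers
netperf -H <client> -t TCP_STREAM
# 64 KB socket buffers on both ends
netperf -H <client> -t TCP_STREAM -- -s 65536 -S 65536
```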

If I do an FTP from the server to a client, I get 45-50 MB/s transfer rates, so I think the problem is somewhere in Samba.

Again,
if I change the SO_SNDBUF and SO_RCVBUF values, either up or down, or
keep them unset, I see almost no variance in transfer rates. Vista is
set to use autotuning in its TCP configuration, and window scaling and
RFC 1323 options are enabled, but I see exactly the same performance on
XP as well, so I don't think it's a client issue. I get somewhat
faster copying from a Windows 2003 server on the same LAN to the same
clients, even though it is not equipped with a raid array and is just
reading from one disk with an unoptimized NTFS filesystem.

Is there a bug here or am I missing something in the configuration?

thanks,
Mike
Jeremy Allison
2008-08-29 02:03:16 UTC
Permalink
Post by Mike Myers
Hi everyone. I am running Samba 3.2.0-22.1 (as packaged by OpenSUSE in
11.0) on a storage server connected to multiple windows based clients
over a gigabit ethernet link. The server is a quad core Intel CPU and
is equipped with an Intel e1000 based gigabit ethernet controller and plugged into a common gigabit ethernet switch with the windows clients.
I am seeing performance issues on transfers over the gigabit ethernet network,
and have been experimenting with the socket options settings in the smb.conf
file to improve transfer rates. But no matter what I set the SO_RCVBUF
and SO_SNDBUF values to, the transfer rates are unchanged, even if I
set the buffer sizes down to 512, which should at least slow things
down dramatically. This leads me to question whether Samba is actually
using these settings at all. TCP_NODELAY is set, but it doesn't seem to
matter whether I include it on the socket options line or not. The line
is definitely not commented out: if I misspell something on that line,
samba terminates with an error when I try to restart the daemon.
Samba is definitely setting these options.
Post by Mike Myers
Samba is getting roughly 30 MB/s transfer rates from the linux server to a Windows Vista and a Windows XP client,
and the disks on both Windows machines are RAID0 (4- and 2-disk RAID0
sets respectively), so I don't think I am running into filesystem
performance issues on the target. Moving from the Windows systems to Samba, I see about 45 MB/s transfer rates.
The
raid array on the samba server consists of two 6-disk RAID5 sets with fast
disks, running LVM and XFS for a filesystem. I can do a dd of
a multigigabyte file to /dev/null and get roughly 500-600 MB/s
transfer rates through the filesystem, so I don't think the raid array
and file system are a bottleneck.
I have run netperf tests
between the server and the clients to see if I had some network
plumbing problems. With default socket settings for netperf (8182
buffer size), I get about 300 Mbps transfer rates between the clients
and the server (which approximately matches the 30 MB/s transfer
rates). With 65536-byte buffers, that number goes to 970 or so Mbps,
so I think the interface cards, TCP stack, and switches are all OK.
Can you try using smbclient to do a large file transfer from
another client Linux box and time that please ? That eliminates
the Windows clients from the equation, and allows us to test
only with things we can examine directly.
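Something along these lines would do (share, user, and file names are placeholders); smbclient prints the transfer rate itself when the get completes:

```
smbclient //server/share -U user -c "get bigfile /dev/null"
```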

Jeremy.
Steve Thompson
2008-08-29 02:32:13 UTC
Permalink
Post by Jeremy Allison
Can you try using smbclient to do a large file transfer from
another client Linux box and time that please ?
Minor rant. One thing that slightly bugs me about smbclient is that it
reports the transfer rate as "kb/s", which means nothing to me. Is this
"KB/s" or "Kb/s"? Well, it's the former: kilobytes per second. So
shouldn't it say "KB/s"?

BTW, I get 43 MB/s with a single 12 MB file on GbE without any socket
options; linux -> linux.

Steve
Jeremy Allison
2008-08-29 02:54:13 UTC
Permalink
Post by Steve Thompson
Post by Jeremy Allison
Can you try using smbclient to do a large file transfer from
another client Linux box and time that please ?
Minor rant. One thing that slightly bugs me about smbclient is that it
reports the transfer rate as "kb/s", which means nothing to me. Is this
"KB/s" or "Kb/s"? Well, it's the former: kilobytes per second. So
shouldn't it say "KB/s"?
Easily fixed. I'll probably do that.
Post by Steve Thompson
BTW, I get 43 MB/s with a single 12 MB file on GbE without any socket
options; linux -> linux.
Ok, that's the same as the Windows systems right ? Should be
higher than that. Ok, at least we've removed the black box
from the system - everything can be examined in open source
code now. You should be able to get 100MB/sec (or close to
it) I think.

You might need to try looking into tbench/dbench to examine
where the bottleneck is :

http://samba.org/ftp/tridge/dbench/README
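Roughly (the client count is arbitrary; see the README above for details): dbench generates the filesystem load an smbd would see without the network, while tbench exercises only the network path:

```
dbench 10                # filesystem load only, run locally on the server
tbench_srv &             # on the server
tbench 10 <server>       # on the client: network path only
```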

Jeremy.
simo
2008-08-29 03:03:19 UTC
Permalink
Post by Jeremy Allison
Post by Steve Thompson
Post by Jeremy Allison
Can you try using smbclient to do a large file transfer from
another client Linux box and time that please ?
Minor rant. One thing that slightly bugs me about smbclient is that it
reports the transfer rate as "kb/s", which means nothing to me. Is this
"KB/s" or "Kb/s"? Well, it's the former: kilobytes per second. So
shouldn't it say "KB/s"?
Easily fixed. I'll probably do that.
If we want to be standards compliant then we should write KiB/s[1], not
Kb/s, and likewise MiB/s[2] and GiB/s[3]

:-D

Simo.


[1] http://en.wikipedia.org/wiki/Kibibyte
[2] http://en.wikipedia.org/wiki/Mebibyte
[3] http://en.wikipedia.org/wiki/Gibibyte
--
Simo Sorce
Samba Team GPL Compliance Officer <***@samba.org>
Senior Software Engineer at Red Hat Inc. <***@redhat.com>
Jeremy Allison
2008-08-29 03:04:43 UTC
Permalink
Post by simo
Post by Jeremy Allison
Post by Steve Thompson
Post by Jeremy Allison
Can you try using smbclient to do a large file transfer from
another client Linux box and time that please ?
Minor rant. One thing that slightly bugs me about smbclient is that it
reports the transfer rate as "kb/s", which means nothing to me. Is this
"KB/s" or "Kb/s"? Well, it's the former: kilobytes per second. So
shouldn't it say "KB/s"?
Easily fixed. I'll probably do that.
If we want to be standards compliant then we should write KiB/s[1], not
Kb/s, and likewise MiB/s[2] and GiB/s[3]
:-D
Yuk. I was thinking more this...
-------------- next part --------------
diff --git a/source/client/client.c b/source/client/client.c
index 1c0dff9..85f653e 100644
--- a/source/client/client.c
+++ b/source/client/client.c
@@ -1080,7 +1080,7 @@ static int do_get(const char *rname, const char *lname_in, bool reget)
get_total_time_ms += this_time;
get_total_size += nread;

- DEBUG(1,("(%3.1f kb/s) (average %3.1f kb/s)\n",
+ DEBUG(1,("(%3.1f KiloBytes/sec) (average %3.1f KiloBytes/sec)\n",
nread / (1.024*this_time + 1.0e-4),
get_total_size / (1.024*get_total_time_ms)));
}
simo
2008-08-29 03:08:29 UTC
Permalink
Post by Jeremy Allison
Yuk. I was thinking more this...
- DEBUG(1,("(%3.1f kb/s) (average %3.1f kb/s)\n",
+ DEBUG(1,("(%3.1f KiloBytes/sec) (average %3.1f KiloBytes/sec)\n",
Fine by me, I was just pointing out the standard to be pedantic :-)

Simo.
--
Simo Sorce
Samba Team GPL Compliance Officer <***@samba.org>
Senior Software Engineer at Red Hat Inc. <***@redhat.com>
Steve Thompson
2008-08-29 04:20:55 UTC
Permalink
Post by Jeremy Allison
Post by Steve Thompson
BTW, I get 43 MB/s with a single 12 MB file on GbE without any socket
options; linux -> linux.
Ok, that's the same as the Windows systems right ? Should be
higher than that. Ok, at least we've removed the black box
from the system - everything can be examined in open source
code now. You should be able to get 100MB/sec (or close to
it) I think.
*sound of stick hitting head*

My destination filesystem was an NFS-mounted volume. Copying instead to a
local software RAID-1 volume, I get 63 MB/sec, which I think is quite
reasonable. The systems were quite busy at the time. Samba 3.0.24 on
CentOS 4.6/x86_64.

What is also interesting is that changing 'socket options' to a variety of
different values has no effect whatsoever on the performance (and I did
restart smbd each time).

Steve
Jeremy Allison
2008-08-29 04:25:06 UTC
Permalink
Post by Steve Thompson
Post by Jeremy Allison
Post by Steve Thompson
BTW, I get 43 MB/s with a single 12 MB file on GbE without any socket
options; linux -> linux.
Ok, that's the same as the Windows systems right ? Should be
higher than that. Ok, at least we've removed the black box
from the system - everything can be examined in open source
code now. You should be able to get 100MB/sec (or close to
it) I think.
*sound of stick hitting head*
My destination filesystem was an NFS-mounted volume. Copying instead to a
local software RAID-1 volume, I get 63 MB/sec, which I think is quite
reasonable. The systems were quite busy at the time. Samba 3.0.24 on
CentOS 4.6/x86_64.
Ah, twice over the network. Always good for performance :-).

Glad to be able to help.

Jeremy.
Lars Müller
2008-08-29 16:37:08 UTC
Permalink
Post by Mike Myers
Hi everyone. I am running Samba 3.2.0-22.1 (as packaged by OpenSUSE in
11.0) on a storage server connected to multiple windows based clients
over a gigabit ethernet link. The server is a quad core Intel CPU and
is equipped with an Intel e1000 based gigabit ethernet controller and plugged into a common gigabit ethernet switch with the windows clients.
Is this still the case?

In addition to Jeremy's suggestion I'd like to see you test this with
a direct connection between the two systems. But after reading the
rest of your posting it's not very likely to help.

For openSUSE 11.0 we'll also soon see an official update to 3.2.3.

Meanwhile you might use the packages provided by the openSUSE Build
Service. See http://en.opensuse.org/Samba#openSUSE_Build_Service for
how to access them.
Post by Mike Myers
Samba is getting roughly 30 MB/s tranfer rates from the linux server to a windows vista and a windows XP client,
and the disks on both windows machines are RAID0 (4 and 2 disk RAID0
sets respectively), so I don't think I am running into filesystem
performance issues on the target. Moving from the windows systems to Samba, I see about 45 MB/sec transfers rates.
The write case is faster than read. Strange to me.
Post by Mike Myers
The
raid array on the samba server consist of 2 6 disk raid5 sets with fast
disks on them, running lvm and XFS for a filesysteem. I can do a dd of
a multigigabyte file to /dev/null and get roughly 500-600 MB/'s
transfer rates through the filesystem, so I don't think the raid array
and file system is a bottleneck.
Do you have some space on a local disk with ext2, no raid, no lvm? Only
to ensure your issue isn't caused by one of these components.
Post by Mike Myers
I have run netperf tests
between the server and the clients to see if I had some network
plumbing problems. With default socket settings for netperf (8182
buffer size), I get about 300 mbps transfer rates between the clients
and the server (which matches approximately the 30 MB/s transfer
rates). With 65536 byte buffers, that number goes to 970 or so Mbps,
so I think the interface cards, TCP stack, switches are all ok.
Therefore this might be caused by the file system in use. Please test
in the way suggested above, if the issue isn't solved already.

Lars
--
Lars Müller
Samba Team
SUSE Linux, Maxfeldstraße 5, 90409 Nürnberg, Germany
Mike Myers
2008-08-29 22:31:07 UTC
Permalink
I still see the problem. I don't have another real linux system to test with, but I will install another copy on a different system and configure it for dual boot. I can't easily do that on the same hardware as the existing windows machines with raid, since there is no non-raid boot partition on those. I do have a laptop and some other machines that I can dual-boot, but I am concerned that the local filesystem will be a bottleneck, as it's just a single disk. My experience is that you really can't test the maximum speed of a server with a fast raid configuration against a client with a non-raid disk - the local target for the file just can't keep up - but it may still be able to go faster than 30 MB/s...

It may take me another day or so to ready such a configuration for test.

I will go ahead and set up a new samba share off the system disk, which isn't part of the raid5 sets and uses reiserfs, and try that too. The disk I use should be able to do north of 60 MB/s in raw transfer rate on sequential reads, so if reiserfs doesn't get too much in the way, it should be able to supply samba at more than the 30 MB/s I see for transfers off the raid volume.

I am still curious as to why changing the socket buffer settings to a wide variety of values has absolutely no effect on performance. Steve Thompson seems to have noticed the same issue, and my experience tells me that setting SO_SNDBUF and SO_RCVBUF to something like 512 in an app slaughters performance. It certainly does for netperf, so if Samba is really using those options, I can't understand why setting them to 512 doesn't slow things down the way it does for netperf.

I don't mean to be presumptuous, but these data points - no speed variance even in the face of tiny buffer settings - indicate that something is wrong with how the options are being exercised, so I am curious as to why you think they are being used. Jeremy and Lars, in your test environments, can you try setting SO_SNDBUF and SO_RCVBUF down to 512 and see if you also notice no variance in performance?
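One thing I did notice while poking at this: at least on Linux, the kernel doesn't take the requested buffer size literally - setsockopt() doubles the value (to allow for bookkeeping overhead) and clamps it to a minimum, so asking for 512 never actually yields a 512-byte buffer. A quick sanity check, in plain Python with nothing Samba-specific:

```python
# Ask for 512-byte socket buffers and see what the kernel actually grants.
# On Linux the requested value is doubled and clamped to a minimum, so
# getsockopt() reports an effective size well above 512.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 512)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 512)

effective_rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
effective_snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("requested 512, got SO_RCVBUF =", effective_rcv, "SO_SNDBUF =", effective_snd)
s.close()
```

That doesn't fully explain why netperf behaves differently, but it does mean the number in smb.conf and the effective buffer size are not the same thing.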

Thanks much,
Mike





----- Original Message ----
From: Lars Müller <***@suse.de>
To: Mike Myers <***@yahoo.com>
Cc: ***@lists.samba.org
Sent: Friday, August 29, 2008 4:36:30 AM
Subject: Re: [Samba] Samba ignoring socket options?
Post by Mike Myers
Hi everyone. I am running Samba 3.2.0-22.1 (as packaged by OpenSUSE in
11.0) on a storage server connected to multiple windows based clients
over a gigabit ethernet link. The server is a quad core Intel CPU and
is equipped with an Intel e1000 based gigabit ethernet controller and plugged into a common gigabit ethernet switch with the windows clients.
Is this still the case?

In addition to Jeremy's suggestion I'd like to see you test this with
a direct connection between the two systems. But after reading the
rest of your posting it's not very likely to help.

For openSUSE 11.0 we'll also soon see an official update to 3.2.3.

Meanwhile you might use the packages provided by the openSUSE Build
Service. See http://en.opensuse.org/Samba#openSUSE_Build_Service for
how to access them.
Post by Mike Myers
Samba is getting roughly 30 MB/s transfer rates from the linux server to a Windows Vista and a Windows XP client,
and the disks on both Windows machines are RAID0 (4- and 2-disk RAID0
sets respectively), so I don't think I am running into filesystem
performance issues on the target. Moving from the Windows systems to Samba, I see about 45 MB/s transfer rates.
The write case is faster than read. Strange to me.
Post by Mike Myers
The
raid array on the samba server consists of two 6-disk RAID5 sets with fast
disks, running LVM and XFS for a filesystem. I can do a dd of
a multigigabyte file to /dev/null and get roughly 500-600 MB/s
transfer rates through the filesystem, so I don't think the raid array
and file system are a bottleneck.
Do you have some space on a local disk with ext2, no raid, no lvm? Only
to ensure your issue isn't caused by one of these components.
Post by Mike Myers
I have run netperf tests
between the server and the clients to see if I had some network
plumbing problems. With default socket settings for netperf (8182
buffer size), I get about 300 Mbps transfer rates between the clients
and the server (which approximately matches the 30 MB/s transfer
rates). With 65536-byte buffers, that number goes to 970 or so Mbps,
so I think the interface cards, TCP stack, and switches are all OK.
Therefore this might be caused by the file system in use. Please test
in the way suggested above, if the issue isn't solved already.

Lars
--
Lars Müller
Samba Team
SUSE Linux, Maxfeldstraße 5, 90409 Nürnberg, Germany
Mike Myers
2008-08-30 23:54:23 UTC
Permalink
Ok, tests from the system disk, which isn't raided, show about the same performance as the raid volume.

One other item of note: the system has 8 GB of RAM in it, so I notice that if I dd a 1 GB file to /dev/null on the local system, it runs at about a 500-600 MB/s transfer rate. If I do it again shortly afterwards, I get about a 3 GB/s transfer rate, no doubt because of the disk and filesystem caching going on (the whole file gets buffered in RAM).

The point is that I doubt samba is running into local filesystem performance problems, as I repeatedly transfer the same 1 GB file from the server to the clients, so I expect it's all cached in RAM during most of the transfers.

I'll go ahead and get my laptop configured to dual boot and see what an SMB client sees...

Thanks,
Mike




----- Original Message ----
From: Lars Müller <***@suse.de>
To: Mike Myers <***@yahoo.com>
Cc: ***@lists.samba.org
Sent: Friday, August 29, 2008 4:36:30 AM
Subject: Re: [Samba] Samba ignoring socket options?
Post by Mike Myers
Hi everyone. I am running Samba 3.2.0-22.1 (as packaged by OpenSUSE in
11.0) on a storage server connected to multiple windows based clients
over a gigabit ethernet link. The server is a quad core Intel CPU and
is equipped with an Intel e1000 based gigabit ethernet controller and plugged into a common gigabit ethernet switch with the windows clients.
Is this still the case?

In addition to Jeremy's suggestion I'd like to see you test this with
a direct connection between the two systems. But after reading the
rest of your posting it's not very likely to help.

For openSUSE 11.0 we'll also soon see an official update to 3.2.3.

Meanwhile you might use the packages provided by the openSUSE Build
Service. See http://en.opensuse.org/Samba#openSUSE_Build_Service for
how to access them.
Post by Mike Myers
Samba is getting roughly 30 MB/s transfer rates from the linux server to a Windows Vista and a Windows XP client,
and the disks on both Windows machines are RAID0 (4- and 2-disk RAID0
sets respectively), so I don't think I am running into filesystem
performance issues on the target. Moving from the Windows systems to Samba, I see about 45 MB/s transfer rates.
The write case is faster than read. Strange to me.
Post by Mike Myers
The
raid array on the samba server consists of two 6-disk RAID5 sets with fast
disks, running LVM and XFS for a filesystem. I can do a dd of
a multigigabyte file to /dev/null and get roughly 500-600 MB/s
transfer rates through the filesystem, so I don't think the raid array
and file system are a bottleneck.
Do you have some space on a local disk with ext2, no raid, no lvm? Only
to ensure your issue isn't caused by one of these components.
Post by Mike Myers
I have run netperf tests
between the server and the clients to see if I had some network
plumbing problems. With default socket settings for netperf (8182
buffer size), I get about 300 Mbps transfer rates between the clients
and the server (which approximately matches the 30 MB/s transfer
rates). With 65536-byte buffers, that number goes to 970 or so Mbps,
so I think the interface cards, TCP stack, and switches are all OK.
Therefore this might be caused by the file system in use. Please test
in the way suggested above, if the issue isn't solved already.

Lars
--
Lars Müller
Samba Team
SUSE Linux, Maxfeldstraße 5, 90409 Nürnberg, Germany