[Samba] aio settings for samba 4.3
Russell R Poyner
2016-07-19 17:49:09 UTC
I'm tuning a samba 4.3 install on freebsd and I'm confused about aio
settings.

I've loaded the FreeBSD aio kernel module and tried various values of
aio read size and aio write size, but it seems to make no difference in
the speed.

Using MS diskspd against a samba share from a fast zfs pool I get
something like 25MB/s tops. That's well below the capacity of my Gb
network and my disk system. FWIW iperf shows >900Mbits/sec in both
directions on the link.
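To put numbers on the gap, here is a quick back-of-envelope sketch. The 940 Mbit/s figure is an assumed rounding of the ">900 Mbits/sec" iperf reading above, not a value from the post:

```python
# How far is the observed 25 MB/s from saturating the GbE link?
# LINK_MBIT is an assumption: iperf showed ">900 Mbit/s"; 940 is a
# typical GbE TCP result used here for the arithmetic.
LINK_MBIT = 940
link_mbytes = LINK_MBIT / 8          # ~117.5 MB/s of usable TCP payload
observed = 25                        # MB/s reported by diskspd
utilization = observed / link_mbytes
print(f"link ~{link_mbytes:.1f} MB/s, diskspd reaches {utilization:.0%} of it")
```

So the share is using roughly a fifth of the measured link capacity, which points away from the network itself.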

# smbd -b|grep aio
vfs_aio_fork_init
vfs_aio_posix_init
vfs_aio_pthread_init

As always google finds lots of tuning advice, but it's not clear what,
if any, of it applies to 4.3 on FreeBSD.

Thanks
Russ Poyner
--
To unsubscribe from this list go to the following URL and read the
instructions: https://lists.samba.org/mailman/options/samba
Russell R Poyner
2016-07-20 21:47:46 UTC
Jeremy,

I re-built the samba43 port with aio_support unset and pthreadpool set.

smbd -b|grep aio
vfs_aio_fork_init
vfs_aio_pthread_init

from smb4.conf:

oplocks = yes
kernel oplocks = no
smb2 leases = yes
server min protocol = smb2

aio read size = 1024
aio write size = 1024
vfs objects = aio_pthread
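For context, those options would sit in smb4.conf roughly as follows. This is a sketch: the [data] share name and its path are placeholders I've added; only the option lines come from the post.

```
[global]
    server min protocol = smb2
    smb2 leases = yes
    oplocks = yes
    kernel oplocks = no
    aio read size = 1024
    aio write size = 1024
    vfs objects = aio_pthread

[data]                      ; hypothetical share used for the diskspd tests
    path = /tank/data       ; placeholder path
    read only = no
```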


The new build gives pretty much the same speeds as before using MS
diskspd under Windows 7: around 15MB/s for a random workload with 4k
blocks, and up to about 25MB/s with 64k blocks. For comparison I ran
diskspd on the Windows 7 box using a Windows 8 machine as the server.
The results were 49MB/s for 4k blocks and 1787MB/s for 64k blocks.
Clearly 1787MB/s is more than wire speed and reflects a cache effect
that Samba doesn't benefit from.
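The cache-effect claim is easy to verify arithmetically, since a gigabit link cannot physically carry that much data:

```python
# A gigabit link moves at most 1e9 bits/s = 125 MB/s before framing
# overhead, so a measured 1787 MB/s can only come from client/server
# caching, never from the wire.
wire_max = 1_000_000_000 / 8 / 1_000_000   # 125.0 MB/s raw ceiling
measured = 1787                            # MB/s from diskspd, 64k blocks
print(measured > wire_max, f"{measured / wire_max:.1f}x over wire speed")
```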

With the new build I do see smbd spawning extra threads under load. They
just don't seem to add any performance benefit.

This is a freebsd 10.2 system with samba running inside a jail.

Thanks again
Russ Poyner
Post by Russell R Poyner
I'm tuning a samba 4.3 install on freebsd and I'm confused about aio
settings.
I've loaded the freebsd aio kernel module and tried various values
of aio read size and aio write size, but it seems to make no
difference in the speed.
Using MS diskspd against a samba share from a fast zfs pool I get
something like 25MB/s tops. That's well below the capacity of my Gb
network and my disk system. FWIW iperf shows >900Mbits/sec in both
directions on the link.
# smbd -b|grep aio
vfs_aio_fork_init
vfs_aio_posix_init
vfs_aio_pthread_init
You don't need these, modern Samba includes a pthread pool
implementation that will parallelize SMB io requests.
Ensure the clients are using SMB2 and SMB2 leases are
enabled. You should be able to get close to wirespeed.
Russell R Poyner
2016-07-21 15:57:16 UTC
Volker,

Today I built samba43 with support for both posix and pthread aio. I
then ran the diskspd tests using either vfs objects = aio_pthread or vfs
objects = aio_posix.

There appears to be a very small advantage to the aio_pthread
implementation. Hard to say for sure given the run to run variation in
the numbers.
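The A/B comparison comes down to flipping a single line in smb4.conf between runs. A sketch, assuming the port was built with both modules enabled as described above:

```
; run 1: Samba's user-space thread pool
vfs objects = aio_pthread

; run 2: FreeBSD kernel POSIX AIO (requires the aio kernel module loaded)
; vfs objects = aio_posix
```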

I'm left to conclude that something else in samba is limiting
performance. The best I've been able to measure was 48MB/s using 64k
blocks against a memdisk on the FreeBSD server. Still only around 1/2 of
wire speed as measured by iperf, and much less than I see running the
same test against a windows 8.1 server sharing from a single 7200rpm disk.

Any suggestions on where to look next are welcome.

Thanks again
Russ Poyner
Post by Russell R Poyner
I'm tuning a samba 4.3 install on freebsd and I'm confused about aio
settings.
I've loaded the freebsd aio kernel module and tried various values
of aio read size and aio write size, but it seems to make no
difference in the speed.
Using MS diskspd against a samba share from a fast zfs pool I get
something like 25MB/s tops. That's well below the capacity of my Gb
network and my disk system. FWIW iperf shows >900Mbits/sec in both
directions on the link.
# smbd -b|grep aio
vfs_aio_fork_init
vfs_aio_posix_init
vfs_aio_pthread_init
You don't need these, modern Samba includes a pthread pool
implementation that will parallelize SMB io requests.
The main reason for our user-space threaded approach is lack of aio in
Linux. Proper kernel support for posix AIO might be faster than our
implementation. "vfs objects = aio_posix" will give you that.
This needs very thorough performance testing. If it turns out to be
faster than our threaded aio on FreeBSD, we might have to revive
the aio_posix module, it went away last year.
Volker
Russell R Poyner
2016-07-21 17:23:01 UTC
One more data point for comparison

I installed the stock samba 4.2 rpm on a centos 7 machine and ran the
same diskspd tests against a share configured with:
vfs objects = aio_pthread
aio read size = 1024
aio write size = 1024

smb2 leases = yes

I get 27MB/s with 4k blocks and 145MB/s with 64k blocks. Disabling
caching by passing the -h switch to diskspd lowered these to 72MB/s and
11MB/s, which I view as 'close enough' to wire speed. Thus it seems that
the dismal performance I see is somehow associated with the FreeBSD
implementation.

Thanks again
RP
Post by Russell R Poyner
I'm tuning a samba 4.3 install on freebsd and I'm confused about aio
settings.
I've loaded the freebsd aio kernel module and tried various values
of aio read size and aio write size, but it seems to make no
difference in the speed.
Using MS diskspd against a samba share from a fast zfs pool I get
something like 25MB/s tops. That's well below the capacity of my Gb
network and my disk system. FWIW iperf shows >900Mbits/sec in both
directions on the link.
# smbd -b|grep aio
vfs_aio_fork_init
vfs_aio_posix_init
vfs_aio_pthread_init
You don't need these, modern Samba includes a pthread pool
implementation that will parallelize SMB io requests.
The main reason for our user-space threaded approach is lack of aio in
Linux. Proper kernel support for posix AIO might be faster than our
implementation. "vfs objects = aio_posix" will give you that.
This needs very thorough performance testing. If it turns out to be
faster than our threaded aio on FreeBSD, we might have to revive
the aio_posix module, it went away last year.
Volker
Achim Gottinger
2016-07-21 17:38:51 UTC
Post by Russell R Poyner
One more data point for comparison
I installed the stock samba 4.2 rpm on a centos 7 machine and ran the
vfs objects = aio_pthread
aio read size = 1024
aio read size = 1024
smb2 leases = yes
I get 27MB/s with 4k blocks and 145MB/s with 64k blocks. Disabling
cacheing by passing the -h switch to diskspd lowered these to 72MB/s
and 11MB/s. Which I view as 'close enough' to wire speed. Thus it
seems that the dismal performance I see is associated with the FreeBSD
implementation somehow.
Thanks again
RP
Post by Russell R Poyner
I'm tuning a samba 4.3 install on freebsd and I'm confused about aio
settings.
I've loaded the freebsd aio kernel module and tried various values
of aio read size and aio write size, but it seems to make no
difference in the speed.
Using MS diskspd against a samba share from a fast zfs pool I get
something like 25MB/s tops. That's well below the capacity of my Gb
network and my disk system. FWIW iperf shows >900Mbits/sec in both
directions on the link.
# smbd -b|grep aio
vfs_aio_fork_init
vfs_aio_posix_init
vfs_aio_pthread_init
You don't need these, modern Samba includes a pthread pool
implementation that will parallelize SMB io requests.
The main reason for our user-space threaded approach is lack of aio in
Linux. Proper kernel support for posix AIO might be faster than our
implementation. "vfs objects = aio_posix" will give you that.
This needs very thorough performance testing. If it turns out to be
faster than our threaded aio on FreeBSD, we might have to revive
the aio_posix module, it went away last year.
Volker
I think you must tune zfs on FreeBSD, and I assume you used a different
fs for the centos test.
Russell R Poyner
2016-07-21 18:56:19 UTC
Jeremy,

I think this is exactly a complex interaction between FreeBSD and Samba.
My best guess would be some system call that is fast on Linux but slow
on FreeBSD holding things back.

Russ
Post by Russell R Poyner
One more data point for comparison
I installed the stock samba 4.2 rpm on a centos 7 machine and ran
vfs objects = aio_pthread
aio read size = 1024
aio read size = 1024
smb2 leases = yes
I get 27MB/s with 4k blocks and 145MB/s with 64k blocks. Disabling
cacheing by passing the -h switch to diskspd lowered these to 72MB/s
and 11MB/s. Which I view as 'close enough' to wire speed. Thus it
seems that the dismal performance I see is associated with the
FreeBSD implementation somehow.
That's interesting, but I'm afraid I don't know FreeBSD well
enough to help here. This does imply the problem isn't Samba
specific though (unless it's a complex interaction between
Samba+FreeBSD).
Achim Gottinger
2016-07-21 20:50:16 UTC
Post by Russell R Poyner
Jeremy,
I think this is exactly a complex interaction between FreeBSD and
Samba. Best guess would be some system call that is fast in linux but
slow in FreeBSD holding things back.
Russ
Post by Russell R Poyner
One more data point for comparison
I installed the stock samba 4.2 rpm on a centos 7 machine and ran
vfs objects = aio_pthread
aio read size = 1024
aio read size = 1024
smb2 leases = yes
I get 27MB/s with 4k blocks and 145MB/s with 64k blocks. Disabling
cacheing by passing the -h switch to diskspd lowered these to 72MB/s
and 11MB/s. Which I view as 'close enough' to wire speed. Thus it
seems that the dismal performance I see is associated with the
FreeBSD implementation somehow.
That's interesting, but I'm afraid I don't know FreeBSD well
enough to help here. This does imply the problem isn't Samba
specific though (unless it's a complex interaction between
Samba+FreeBSD).
On my debian jessie server with zfs these settings seem to work.

max xmit = 65536
socket options = TCP_NODELAY

Copying a file from server to a Windows 7 client increases from 50% to
80% network utilisation on a 1Gb link without jumbo frames when I add
these settings.
Achim Gottinger
2016-07-21 21:15:27 UTC
Post by Achim Gottinger
Post by Russell R Poyner
Jeremy,
I think this is exactly a complex interaction between FreeBSD and
Samba. Best guess would be some system call that is fast in linux
but slow in FreeBSD holding things back.
Russ
Post by Russell R Poyner
One more data point for comparison
I installed the stock samba 4.2 rpm on a centos 7 machine and ran
vfs objects = aio_pthread
aio read size = 1024
aio read size = 1024
smb2 leases = yes
I get 27MB/s with 4k blocks and 145MB/s with 64k blocks. Disabling
cacheing by passing the -h switch to diskspd lowered these to 72MB/s
and 11MB/s. Which I view as 'close enough' to wire speed. Thus it
seems that the dismal performance I see is associated with the
FreeBSD implementation somehow.
That's interesting, but I'm afraid I don't know FreeBSD well
enough to help here. This does imply the problem isn't Samba
specific though (unless it's a complex interaction between
Samba+FreeBSD).
On my debian jessie server with zfs these settings seem to work.
max xmit = 65536
socket options = TCP_NODELAY
Copying an file from server to a windows 7 client increases from 50%
to 80% network utilisation on an 1GB link without jumbo frames when
i add these settings.
That's kind of voodoo I'm afraid. By default Samba sets
TCP_NODELAY, and max xmit is only used on SMB1, and modern
Samba and Windows 7 should only be using SMB2.
Thank you for the clarification, I can finally drop TCP_NODELAY from my
configs now.
Indeed the connection is using smb2_10. I assume some intermediate
caching influenced the copy test.

I tested it a few times with these settings enabled/disabled before I
posted, but still...
Now I'm at 80% without these settings.

It is mentioned here (together with the abandoned IPTOS_LOWDELAY):
http://unicolet.blogspot.de/2013/03/a-not-so-short-guide-to-zfs-on-linux.html

I assume

zfs set xattr=sa tank/fish
zfs set atime=off tank/fish

are a good idea on FreeBSD as well.
Russell R Poyner
2016-07-22 13:30:42 UTC
Thanks for all the zfs tuning tips. The point is a good one.

I was concerned that zfs performance might be limiting, and posted tests
run against a UFS-formatted RAM disk for that reason. The tests with the
RAM disk are slightly faster than the zfs-backed tests, but still slower
than tests run with samba on linux using xfs on a single hard disk.

Russ Poyner