Post by Quinn Fissler
Sorry I haven't answered your question. I would go and check the docs
or search the code.
Hi, you wrote a lot of text, and it gives me more ideas for looking into
the problem :)
Post by Quinn Fissler
When you say "accessing a folder" do you really mean browsing in Explorer?
"Accessing" was quite the wrong word. Browsing the folder (box and client
connected via Gigabit Ethernet, with very low load) produced a slight
delay, but worked at an acceptable speed.
The load appeared when cutting a number of files, e.g. 1000 files,
and moving them into another folder on another drive. The files were
copied in acceptable time, and only after all files had been copied was
the high load observed. I think the problem is the file table refresh.
Post by Quinn Fissler
If you're using Explorer, does it know they're text files? Do they
have a .txt extension?
Yes, these are only .txt files.
Post by Quinn Fissler
If it doesn't - if it has an extension which requires it - it could
open each one to find out what it is - to generate a thumbnail etc.
I've just created a directory containing 60,000 small text files with
names of the form "textfile${N}.txt" and can navigate it with only
momentary delays. I must admit, I'm using a dual Xeon with a fast SCSI
RAID.
The interesting thing for me was that when I hit 32768 files in my
creation loop, there was a delay of about a second - possibly as an
extra inode was allocated to the directory list.
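Not part of the original thread, but the kind of local test described above can be sketched in Python (file names follow the "textfile${N}.txt" pattern from the post; the file count here is deliberately small and merely illustrative):

```python
import os
import tempfile
import time

def make_test_dir(n_files):
    """Create n_files tiny .txt files named textfile<N>.txt in a fresh directory."""
    base = tempfile.mkdtemp(prefix="dirtest-")
    for i in range(n_files):
        with open(os.path.join(base, "textfile%d.txt" % i), "w") as f:
            f.write("x")  # one byte is enough; only the directory size matters
    return base

def time_listing(path):
    """Return (entry count, seconds elapsed) for a single directory listing."""
    t0 = time.perf_counter()
    names = os.listdir(path)
    return len(names), time.perf_counter() - t0

d = make_test_dir(1000)  # raise toward 60,000 to approximate the test above
count, secs = time_listing(d)
```

Raising the count past 32,768 on the filesystem in question would be one way to check whether the one-second pause recurs at that boundary.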
When you access a large directory in samba, one cpu intensive part of
the process is likely to be the mapping of user names and permissions.
Do you have many users?
The CPU load appears when Samba refreshes the file table, I think.
At that time, only 2 users were logged in on the box.
Post by Quinn Fissler
I've just doubled the size of my directory (120,000 small text files)
and it still performs quite well - when accessing from Explorer over a
samba share (3.0.24) I see a burst of activity on the Linux box and
then a long delay as Windows arranges the list.
Will the NAS manufacturer be able to give you more info?
I didn't contact them, because my first idea was a limit of 65,000 files,
but I found nothing about that in the documentation.
Post by Quinn Fissler
A common way around the problem of large directories is to use
subdirectories based on the first letter or digit of the filename.
No, subdirectories are not possible, but we solved the problem by moving
some thousand files to backup tape. Using subdirectories would mean
rewriting the software which handles our manufacturing process, and that
is quite difficult. We don't need these old files for our automated
processes.
Regards,
Markus
Post by Quinn Fissler
This is only useful to you if you can get at the code of your
application and the file names do not change.
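As a hedged illustration (not from the original thread), the first-letter subdirectory scheme Quinn describes could look like this Python sketch; all paths and function names are hypothetical:

```python
import os
import shutil

def shard_path(base, filename):
    """Map a filename to base/<first character>/filename."""
    return os.path.join(base, filename[0].lower(), filename)

def move_into_shards(src_dir, dest_base):
    """Move every entry of the flat src_dir into first-letter subdirectories."""
    for name in os.listdir(src_dir):
        target = shard_path(dest_base, name)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        shutil.move(os.path.join(src_dir, name), target)
```

With 60,000 files spread over 36 first-character buckets, each directory would hold under 2,000 entries on average, which keeps individual directory listings small. The application would of course have to compute the same `shard_path` when reading files back, which is exactly the "get to the code" constraint mentioned above.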
Regards,
Quinn
Post by lists at dieitexperten.de ()
Hello list,
we have a small NAS box here in our office, running Linux 2.6.13 and
Samba 3 (the exact version string is not available to me at the moment).
Is there a limit on how many files Samba will store in one folder? We
see a massive CPU load from the smbd process when accessing a
folder which stores roughly 60,000 small text files.
Is this a Samba limit or a bug? The kernel and Samba were compiled by the
NAS manufacturer, so no compiler options are available to me.
So long,
Markus
--
Markus Neviadomski