Diffstat (limited to 'docs/docbook/projdoc/Speed.xml'):
-rw-r--r--  docs/docbook/projdoc/Speed.xml  |  98
1 file changed, 73 insertions(+), 25 deletions(-)
diff --git a/docs/docbook/projdoc/Speed.xml b/docs/docbook/projdoc/Speed.xml
index e2ede62ac7..659cd6e31b 100644
--- a/docs/docbook/projdoc/Speed.xml
+++ b/docs/docbook/projdoc/Speed.xml
@@ -9,9 +9,10 @@
</affiliation>
</author>
&author.jelmer;
+ &author.jht;
</chapterinfo>
-<title>Samba performance issues</title>
+<title>Samba Performance Tuning</title>
<sect1>
<title>Comparisons</title>
@@ -28,7 +29,7 @@ SMB server.
If you want to test against something like a NT or WfWg server then
you will have to disable all but TCP on either the client or
server. Otherwise you may well be using a totally different protocol
-(such as Netbeui) and comparisons may not be valid.
+(such as NetBEUI) and comparisons may not be valid.
</para>
<para>
@@ -58,11 +59,11 @@ performance of a TCP based server like Samba.
<para>
The socket options that Samba uses are settable both on the command
-line with the -O option, or in the smb.conf file.
+line with the <option>-O</option> option, or in the &smb.conf; file.
</para>
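+
+<para>
+For example, to experiment with a socket option without editing any
+configuration file, it can be passed on the smbd command line. The value
+shown here is only an illustration; check the manual page for the options
+that make sense on your network:
+</para>
+
+<programlisting>
+smbd -O "TCP_NODELAY"
+</programlisting>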
<para>
-The <command>socket options</command> section of the &smb.conf; manual page describes how
+The <parameter>socket options</parameter> section of the &smb.conf; manual page describes how
to set these and gives recommendations.
</para>
@@ -75,7 +76,7 @@ much. The correct settings are very dependent on your local network.
<para>
The socket option TCP_NODELAY is the one that seems to make the
biggest single difference for most networks. Many people report that
-adding <command>socket options = TCP_NODELAY</command> doubles the read
+adding <parameter>socket options = TCP_NODELAY</parameter> doubles the read
performance of a Samba drive. The best explanation I have seen for this is
that the Microsoft TCP/IP stack is slow in sending tcp ACKs.
</para>
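+
+<para>
+In &smb.conf; that is a single line in the [global] section, for example:
+</para>
+
+<programlisting>
+socket options = TCP_NODELAY
+</programlisting>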
@@ -86,7 +87,7 @@ that the Microsoft TCP/IP stack is slow in sending tcp ACKs.
<title>Read size</title>
<para>
-The option <command>read size</command> affects the overlap of disk
+The option <parameter>read size</parameter> affects the overlap of disk
reads/writes with network reads/writes. If the amount of data being
transferred in several of the SMB commands (currently SMBwrite, SMBwriteX and
SMBreadbraw) is larger than this value then the server begins writing
@@ -114,9 +115,9 @@ pointless and will cause you to allocate memory unnecessarily.
<title>Max xmit</title>
<para>
-At startup the client and server negotiate a <command>maximum transmit</command> size,
+At startup the client and server negotiate a <parameter>maximum transmit</parameter> size,
which limits the size of nearly all SMB commands. You can set the
-maximum size that Samba will negotiate using the <command>max xmit = </command> option
+maximum size that Samba will negotiate using the <parameter>max xmit = </parameter> option
in &smb.conf;. Note that this is the maximum size of SMB requests that
Samba will accept, but not the maximum size that the *client* will accept.
The client maximum receive size is sent to Samba by the client and Samba
@@ -139,7 +140,7 @@ In most cases the default is the best option.
<title>Log level</title>
<para>
-If you set the log level (also known as <command>debug level</command>) higher than 2
+If you set the log level (also known as <parameter>debug level</parameter>) higher than 2
then you may suffer a large drop in performance. This is because the
server flushes the log file after each operation, which can be very
expensive.
@@ -150,20 +151,20 @@ expensive.
<title>Read raw</title>
<para>
-The <command>read raw</command> operation is designed to be an optimised, low-latency
+The <parameter>read raw</parameter> operation is designed to be an optimised, low-latency
file read operation. A server may choose to not support it,
-however. and Samba makes support for <command>read raw</command> optional, with it
+however, and Samba makes support for <parameter>read raw</parameter> optional, with it
being enabled by default.
</para>
<para>
-In some cases clients don't handle <command>read raw</command> very well and actually
+In some cases clients don't handle <parameter>read raw</parameter> very well and actually
get lower performance using it than they get using the conventional
read operations.
</para>
<para>
-So you might like to try <command>read raw = no</command> and see what happens on your
+So you might like to try <parameter>read raw = no</parameter> and see what happens on your
network. It might lower, raise or not affect your performance. Only
testing can really tell.
</para>
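+
+<para>
+If you want to experiment, the change is a single &smb.conf; line that is
+easy to test and then remove again:
+</para>
+
+<programlisting>
+read raw = no
+</programlisting>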
@@ -174,14 +175,14 @@ testing can really tell.
<title>Write raw</title>
<para>
-The <command>write raw</command> operation is designed to be an optimised, low-latency
+The <parameter>write raw</parameter> operation is designed to be an optimised, low-latency
file write operation. A server may choose to not support it,
-however. and Samba makes support for <command>write raw</command> optional, with it
+however, and Samba makes support for <parameter>write raw</parameter> optional, with it
being enabled by default.
</para>
<para>
-Some machines may find <command>write raw</command> slower than normal write, in which
+Some machines may find <parameter>write raw</parameter> slower than normal write, in which
case you may wish to change this option.
</para>
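+
+<para>
+As with <parameter>read raw</parameter>, this is a one-line &smb.conf;
+change, so it costs little to try it both ways and measure:
+</para>
+
+<programlisting>
+write raw = no
+</programlisting>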
@@ -192,31 +193,78 @@ case you may wish to change this option.
<para>
Slow logins are almost always due to the password checking time. Using
-the lowest practical <command>password level</command> will improve things.
+the lowest practical <parameter>password level</parameter> will improve things.
</para>
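+
+<para>
+Unless you really need non-zero password level matching, the default of 0
+is the lowest (and fastest) setting:
+</para>
+
+<programlisting>
+password level = 0
+</programlisting>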
</sect1>
<sect1>
-<title>LDAP</title>
+<title>Client tuning</title>
<para>
-LDAP can be vastly improved by using the
-<ulink url="smb.conf.5.html#LDAPTRUSTIDS">ldap trust ids</ulink> parameter.
+Often a speed problem can be traced to the client. The client (for
+example Windows for Workgroups) can often be tuned for better TCP
+performance. Check the sections on the various clients in
+<link linkend="Other-Clients">Samba and Other Clients</link>.
</para>
</sect1>
+<sect1>
+<title>Samba performance problem due to changing kernel</title>
+
+<para>
+Hi everyone. I am running Gentoo on my server with Samba 2.2.8a. Recently
+I changed the kernel from linux-2.4.19-gentoo-r10 to
+linux-2.4.20-wolk4.0s, and now I have a performance problem with Samba.
+Many of you will probably say that I should move to vanilla sources; well,
+I tried that too and it did not help. I have a 100Mbit LAN and two
+computers (Linux and Windows 2000). The Linux server shares a directory
+of DivX files and the Windows 2000 client plays them over the LAN. When I
+was running the 2.4.19 kernel everything was fine, but now the movies
+freeze and stop, and copying files between the server and Windows is
+terribly slow.
+</para>
+
+<para>
+Grab mii-tool and check the duplex settings on the NIC.
+My guess is that this is a link-layer issue, not an application-layer
+problem. Also run ifconfig and verify that the framing errors,
+collisions, and so on look normal for Ethernet.
+</para>
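+
+<para>
+For example (assuming the interface is eth0; substitute the name of the
+interface on your system):
+</para>
+
+<screen>
+root# mii-tool -v eth0
+root# ifconfig eth0
+</screen>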
+
+</sect1>
<sect1>
-<title>Client tuning</title>
+<title>Corrupt tdb Files</title>
<para>
-Often a speed problem can be traced to the client. The client (for
-example Windows for Workgroups) can often be tuned for better TCP
-performance. Check the sections on the various clients in
-<link linkend="Other-Clients">Samba and Other Clients</link>.
+Well, today it happened: our first major problem using Samba.
+Our Samba PDC server has been serving 3 TB of data to our 500+ users
+(Windows NT/XP) for the last three years with no problems.
+But today all shares went very slow. The main smbd also kept spawning
+new processes, so we had 1600+ smbd processes running (normally we
+average about 250). It crashed the Sun E3500 cluster twice. After a lot
+of searching I decided to <command>rm /var/locks/*.tdb</command>. Happy
+again.
+</para>
+
+<para>
+Q1) Is there any way to keep the *.tdb files in top condition, or to
+detect corruption early?
+</para>
+
+<para>
+A1) Yes, run <command>tdbbackup</command> each time after stopping nmbd and before starting nmbd.
+</para>
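+
+<para>
+A minimal example, assuming the tdb files live in /var/locks as in the
+report above (the lock directory varies between installations, so adjust
+the path to match yours):
+</para>
+
+<screen>
+root# tdbbackup /var/locks/*.tdb
+</screen>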
+
+<para>
+Q2) I would also like to mention that the service latency seems a lot
+lower than it was before the locks cleanup. Any ideas on keeping it that
+way?
+</para>
+
+<para>
+A2) Yes! Same answer as for Q1!
</para>
</sect1>
+
</chapter>