One area which causes trouble for many network administrators is locking. The extent of the problem is readily evident from searches over the internet.
Samba provides the same locking semantics that MS Windows clients expect and that MS Windows NT4/200x servers also provide.
The term locking has an exceptionally broad meaning and covers a range of distinct functions that are all grouped under this one term.
Opportunistic locking is a desirable feature when it can enhance the perceived performance of applications on a networked client. However, the opportunistic locking protocol is not robust, and therefore can encounter problems when invoked beyond a simplistic configuration, or on extended, slow, or faulty networks. In these cases, operating system management of opportunistic locking and/or recovering from repetitive errors can offset the perceived performance advantage that it is intended to provide.
The MS Windows network administrator needs to be aware that file and record locking semantics (behaviour) can be controlled either in Samba or by way of registry settings on the MS Windows client.
Sometimes it is necessary to disable locking control settings BOTH on the Samba server and on each MS Windows client!
There are two types of locking that need to be performed by an SMB server. The first is record locking, which allows a client to lock a range of bytes in an open file. The second is the deny modes that are specified when a file is opened.
Record locking semantics under UNIX are very different from record locking under Windows. Versions of Samba before 2.2 tried to use the native fcntl() UNIX system call to implement proper record locking between different Samba clients. This cannot be fully correct for several reasons. The simplest is the fact that a Windows client is allowed to lock a byte range up to 2^32 or 2^64, depending on the client OS, whereas UNIX locking only supports byte ranges up to 2^31. So it is not possible to correctly satisfy a lock request above 2^31. There are many more differences, too many to be listed here.
Samba 2.2 and above implements record locking completely independently of the underlying UNIX system. If a byte-range lock that the client requests happens to fall into the range 0 to 2^31, Samba hands this request down to the UNIX system. All other locks cannot be seen by UNIX anyway.
Strictly speaking, an SMB server should check for locks before every read and write call on a file. Unfortunately, with the way fcntl() works, this can be slow and may overstress the rpc.lockd. It is also almost always unnecessary, because clients are supposed to independently make locking calls before reads and writes if locking is important to them. By default, Samba only makes locking calls when explicitly asked to by a client, but if you set strict locking = yes, it will make lock checking calls on every read and write.
You can also disable byte-range locking completely by using locking = no. This is useful for those shares that don't support locking or don't need it (such as CD-ROMs). In this case Samba fakes the return codes of locking calls to tell clients that everything is OK.
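As a minimal sketch of how these two controls might appear in smb.conf (the share names are purely illustrative), locking can be faked for read-only media and strict lock checking enabled where an application genuinely depends on it:

[cdrom]
	# Read-only media: fake successful lock replies instead of real byte-range locks
	locking = no

[appdata]
	# Check byte-range locks on every read and write (slower; rarely required)
	strict locking = yes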
The second class of locking is the deny modes. These are set by an application when it opens a file to determine what types of access should be allowed simultaneously with its open. A client may ask for DENY_NONE, DENY_READ, DENY_WRITE or DENY_ALL. There are also special compatibility modes called DENY_FCB and DENY_DOS.
Opportunistic locking (Oplocks) is invoked by the Windows file system (as opposed to an API) via registry entries (on the server AND client) for the purpose of enhancing network performance when accessing a file residing on a server. Performance is enhanced by caching the file locally on the client which allows:
The client reads the local copy of the file, eliminating network latency
The client writes to the local copy of the file, eliminating network latency
The client caches application locks locally, eliminating network latency
The performance enhancement of oplocks is due to the opportunity of exclusive access to the file - even if it is opened with deny-none - because Windows monitors the file's status for concurrent access from other processes.
Windows defines four kinds of oplocks:

Level1 Oplock. The redirector sees that the file was opened with deny none (allowing concurrent access), verifies that no other process is accessing the file, checks that oplocks are enabled, then grants deny-all/read-write/exclusive access to the file. The client now performs operations on the cached local file.

If a second process attempts to open the file, the open is deferred while the redirector "breaks" the original oplock. The oplock break signals the caching client to write the local file back to the server, flush the local locks, and discard read-ahead data. The break is then complete, the deferred open is granted, and the multiple processes can enjoy concurrent file access as dictated by mandatory or byte-range locking options. However, if the original opening process opened the file with a share mode other than deny-none, then the second process is granted limited or no access, despite the oplock break.

Level2 Oplock. Performs like a Level1 oplock, except caching is only operative for reads. All other operations are performed on the server disk copy of the file.

Filter Oplock. Does not allow write or delete file access.

Batch Oplock. Manipulates file openings and closings and allows caching of file attributes.
An important detail is that oplocks are invoked by the file system, not an application API. Therefore, an application can close an oplocked file, but the file system does not relinquish the oplock. When the oplock break is issued, the file system then simply closes the file in preparation for the subsequent open by the second process.
Opportunistic Locking is actually an improper name for this feature. The true benefit of this feature is client-side data caching, and oplocks are merely a notification mechanism for writing data back to the networked storage disk. The limitation of opportunistic locking is the reliability of the mechanism to process an oplock break (notification) between the server and the caching client. If this exchange is faulty (usually due to timing out for any number of reasons), then the client-side caching benefit is negated.
The actual decision that a user or administrator should consider is whether it is sensible to share amongst multiple users data that will be cached locally on a client. In many cases the answer is no. Deciding when to cache or not cache data is the real question, and thus "opportunistic locking" should be treated as a toggle for client-side caching. Turn it "ON" when client-side caching is desirable and reliable. Turn it "OFF" when client-side caching is redundant, unreliable, or counter-productive.
Opportunistic locking is by default set to "on" by Samba on all configured shares, so careful attention should be given to each case to determine if the potential benefit is worth the potential for delays. The following recommendations will help to characterize the environment where opportunistic locking may be effectively configured.
Windows Opportunistic Locking is a lightweight performance-enhancing feature. It is not a robust and reliable protocol. Every implementation of Opportunistic Locking should be evaluated as a tradeoff between perceived performance and reliability. Reliability decreases as each of the following recommendations is ignored. Consider a share with oplocks enabled, over a wide area network, to a client on a South Pacific atoll, on a high-availability server, serving a mission-critical multi-user corporate database, during a tropical storm. This configuration will likely encounter problems with oplocks.
Oplocks can be beneficial to perceived client performance when treated as a configuration toggle for client-side data caching. If the data caching is likely to be interrupted, then oplock usage should be reviewed. Samba enables opportunistic locking by default on all shares. Careful attention should be given to the client usage of shared data on the server, the server network reliability, and the opportunistic locking configuration of each share.
Opportunistic locking is most effective when it is confined to shares that are exclusively accessed by a single user, or by only one user at a time. Because the true value of opportunistic locking is the local client caching of data, any operation that interrupts the caching mechanism will cause a delay.
Home directories are the most obvious examples of where the performance benefit of opportunistic locking can be safely realized.
As each additional user accesses a file in a share with opportunistic locking enabled, the potential for delays and resulting perceived poor performance increases. When multiple users are accessing a file on a share that has oplocks enabled, the management impact of sending and receiving oplock breaks, and the resulting latency while other clients wait for the caching client to flush data, offset the performance gains of the caching user.
As each additional client attempts to access a file with oplocks set, the potential performance improvement is negated and eventually results in a performance bottleneck.
Local Unix and NFS clients access files without a mandatory file locking mechanism. Thus, these client platforms are incapable of initiating an oplock break request from the server to a Windows client that has a file cached. Local Unix or NFS file access can therefore write to a file that has been cached by a Windows client, which exposes the file to likely data corruption.
If files are shared between Windows clients, and either local Unix or NFS users, then turn opportunistic locking off.
The biggest potential performance improvement for opportunistic locking occurs when the client-side caching of reads and writes delivers the most differential over sending those reads and writes over the wire. This is most likely to occur when the network is extremely slow, congested, or distributed (as in a WAN). However, network latency also has a very high impact on the reliability of the oplock break mechanism, and thus increases the likelihood of encountering oplock problems that more than offset the potential perceived performance gain. Of course, if an oplock break never has to be sent, then this is the most advantageous scenario to utilize opportunistic locking.
If the network is slow, unreliable, or a WAN, then do not configure opportunistic locking if there is any chance of multiple users regularly opening the same file.
Multi-user databases clearly pose a risk due to their very nature - they are typically heavily accessed by numerous users at random intervals. Placing a multi-user database on a share with opportunistic locking enabled will likely result in a locking management bottleneck on the Samba server. Whether the database application is developed in-house or a commercially available product, ensure that the share has opportunistic locking disabled.
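As an illustrative sketch (the share name is hypothetical), such a database share would typically be configured with oplocks disabled:

[dbdata]
	# Multi-user database files must never be cached on clients
	oplocks = False
	level2 oplocks = False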
Process Data Management (PDM) applications such as IMAN, Enovia, and Clearcase, are increasing in usage with Windows client platforms, and therefore SMB data stores. PDM applications manage multi-user environments for critical data security and access. The typical PDM environment is usually associated with sophisticated client design applications that will load data locally as demanded. In addition, the PDM application will usually monitor the data-state of each client. In this case, client-side data caching is best left to the local application and PDM server to negotiate and maintain. It is appropriate to eliminate the client OS from any caching tasks, and the server from any oplock management, by disabling opportunistic locking on the share.
Samba includes an smb.conf parameter called force user that changes the user accessing a share from the incoming user to whatever user is defined by the smb.conf variable. If opportunistic locking is enabled on a share, the change in user access causes an oplock break to be sent to the client, even if the user has not explicitly loaded a file. In cases where the network is slow or unreliable, an oplock break can become lost without the user even accessing a file. This can cause apparent performance degradation as the client continually reconnects to overcome the lost oplock break.
Avoid the combination of the following:
force user in the smb.conf share configuration.
Slow or unreliable networks
Opportunistic Locking Enabled
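Where force user cannot be avoided on a slow or unreliable network, a cautious approach is to disable oplocks on that share as well. A hypothetical sketch (the share and user names are illustrative only):

[projects]
	force user = projectowner
	# Avoid lost oplock breaks on slow or unreliable links
	oplocks = False
	level2 oplocks = False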
Samba provides opportunistic locking parameters that allow the administrator to adjust various properties of the oplock mechanism to account for timing and usage levels. These parameters provide good versatility for implementing oplocks in environments where they would likely cause problems. The parameters are: oplock break wait time, oplock contention limit.
For most users, administrators, and environments, if these parameters are required, then the better option is to simply turn oplocks off. The Samba SWAT help text for both parameters reads "DO NOT CHANGE THIS PARAMETER UNLESS YOU HAVE READ AND UNDERSTOOD THE SAMBA OPLOCK CODE." This is good advice.
In mission critical high availability environments, data integrity is often a priority. Complex and expensive configurations are implemented to ensure that if a client loses connectivity with a file server, a failover replacement will be available immediately to provide continuous data availability.
Windows client failover behavior is more at risk of application interruption than other platforms because it is dependent upon an established TCP transport connection. If the connection is interrupted, as in a file server failover, a new session must be established. It is rare for Windows client applications to be coded to recover correctly from a transport connection loss; therefore, most applications will experience some sort of interruption, at worst aborting and requiring a restart.
If a client session has been caching writes and reads locally due to opportunistic locking, it is likely that the data will be lost when the application restarts or recovers from the TCP interrupt. When the TCP connection drops, the client state is lost. When the file server recovers, an oplock break is not sent to the client, so the work from the prior session is lost. If the same scenario occurs with oplocks disabled and the client was writing data to the file server in real time, then the failover will provide the data on disk as it existed at the time of the disconnect.
In mission critical high availability environments, careful attention should be given to opportunistic locking. Ideally, comprehensive testing should be done with all affected applications with oplocks enabled and disabled.
Opportunistic Locking is a unique Windows file locking feature. It is not really file locking, but it is included in most discussions of Windows file locking, so it is considered a de facto locking feature. Opportunistic Locking is actually part of the Windows client file caching mechanism. It is not a particularly robust or reliable feature when implemented on the variety of customized networks that exist in enterprise computing.
Like Windows, Samba implements Opportunistic Locking as a server-side component of the client caching mechanism. Because of the lightweight nature of the Windows feature design, effective configuration of Opportunistic Locking requires a good understanding of its limitations, and then applying that understanding when configuring data access for each particular customized network and client usage state.
Opportunistic locking essentially means that the client is allowed to download and cache a file on their hard drive while making changes; if a second client wants to access the file, the first client receives a break and must synchronise the file back to the server. This can give significant performance gains in some cases; some programs insist on synchronising the contents of the entire file back to the server for a single change.
Level1 Oplocks (aka just plain "oplocks") is another term for opportunistic locking.
Level2 Oplocks provides opportunistic locking for a file that will be treated as read only. Typically this is used on files that are read-only or on files that the client has no initial intention to write to at time of opening the file.
Kernel oplocks are essentially a method that allows the Linux kernel to co-exist with Samba's oplocked files. Although this provides better integration of MS Windows network file locking with the underlying OS, SGI IRIX and Linux are the only two operating systems that are oplock-aware at this time.
Unless your system supports kernel oplocks, you should disable oplocks if you are accessing the same files from both Unix/Linux and SMB clients. Regardless, oplocks should always be disabled if you are sharing a database file (e.g., Microsoft Access) between multiple clients, as any break the first client receives will affect synchronisation of the entire file (not just the single record), which will result in a noticeable performance impairment and, more likely, problems accessing the database in the first place. Notably, Microsoft Outlook's personal folders (*.pst) react very badly to oplocks. If in doubt, disable oplocks and tune your system from that point.
If client-side caching is desirable and reliable on your network, you will benefit from turning on oplocks. If your network is slow and/or unreliable, or you are sharing your files among other file sharing mechanisms (e.g., NFS) or across a WAN, or multiple people will be accessing the same files frequently, you probably will not benefit from the overhead of your client sending oplock breaks and will instead want to disable oplocks for the share.
Another factor to consider is the perceived performance of file access. If oplocks provide no measurable speed benefit on your network, it might not be worth the hassle of dealing with them.
In the following we examine two distinct aspects of Samba locking controls.
You can disable oplocks on a per-share basis with the following:
[acctdata]
	oplocks = False
	level2 oplocks = False
The default oplock type is Level1. Level2 Oplocks are enabled on a per-share basis in the smb.conf file.
Alternately, you could disable oplocks on a per-file basis within the share:
veto oplock files = /*.mdb/*.MDB/*.dbf/*.DBF/
If you are experiencing problems with oplocks as apparent from Samba's log entries, you may want to play it safe and disable oplocks and level2 oplocks.
Kernel oplocks is an smb.conf parameter that notifies Samba (if the UNIX kernel has the capability to send a Windows client an oplock break) when a UNIX process is attempting to open a file that is cached. This parameter addresses the sharing of files between UNIX and Windows with oplocks enabled on the Samba server: without it, a UNIX process can open a file that is oplocked (cached) by a Windows client and the smbd process will not send an oplock break, which exposes the file to the risk of data corruption. If the UNIX kernel has the ability to send an oplock break, then the kernel oplocks parameter enables Samba to send the oplock break. Kernel oplocks are enabled on a per-server basis in the smb.conf file.
[global]
	kernel oplocks = yes
Veto OpLocks is an smb.conf parameter that identifies specific files for which Oplocks are disabled. When a Windows client opens a file that has been configured for veto oplocks, the client will not be granted the oplock, and all operations will be executed on the original file on disk instead of a client-cached file copy. By explicitly identifying files that are shared with UNIX processes, and disabling oplocks for those files, the server-wide Oplock configuration can be enabled to allow Windows clients to utilize the performance benefit of file caching without the risk of data corruption. Veto Oplocks can be enabled on a per-share basis, or globally for the entire server, in the smb.conf file:
Example Veto OpLock Settings:

[global]
	veto oplock files = /filename.htm/*.txt/

[share_name]
	veto oplock files = /*.exe/filename.ext/
Oplock break wait time is an smb.conf parameter that adjusts the time interval for Samba to reply to an oplock break request. Samba recommends "DO NOT CHANGE THIS PARAMETER UNLESS YOU HAVE READ AND UNDERSTOOD THE SAMBA OPLOCK CODE." Oplock Break Wait Time can only be configured globally in the smb.conf file:
[global]
	oplock break wait time = 0 (default)
Oplock contention limit is an smb.conf parameter that stops the Samba server from granting an oplock on a file once the number of clients contending for that file reaches the configured limit. Samba recommends "DO NOT CHANGE THIS PARAMETER UNLESS YOU HAVE READ AND UNDERSTOOD THE SAMBA OPLOCK CODE." Oplock contention limit can be enabled on a per-share basis, or globally for the entire server, in the smb.conf file:
[global]
	oplock contention limit = 2 (default)

[share_name]
	oplock contention limit = 2 (default)
There is a known issue when running applications (like Norton Anti-Virus) on a Windows 2000/ XP workstation computer that can affect any application attempting to access shared database files across a network. This is a result of a default setting configured in the Windows 2000/XP operating system known as Opportunistic Locking. When a workstation attempts to access shared data files located on another Windows 2000/XP computer, the Windows 2000/XP operating system will attempt to increase performance by locking the files and caching information locally. When this occurs, the application is unable to properly function, which results in an Access Denied error message being displayed during network operations.
All Windows operating systems that act as database servers for data files (meaning that data files are stored there and accessed by other Windows PCs) may need to have opportunistic locking disabled in order to minimize the risk of data file corruption. This includes Windows 9x/Me, Windows NT, Windows 200x, and Windows XP.
If you are using a Windows NT family workstation in place of a server, you must also disable opportunistic locking (oplocks) on that workstation. For example, if you use a PC with the Windows NT Workstation operating system instead of Windows NT Server, and you have data files located on it that are accessed from other Windows PCs, you may need to disable oplocks on that system.
The major difference is the location in the Windows registry where the values for disabling oplocks are entered. Instead of the LanManServer location, the LanManWorkstation location may be used.
You can verify (or change or add, if necessary) this Registry value using the Windows Registry Editor. When you change this registry value, you will have to reboot the PC to ensure that the new setting goes into effect.
The location of the client registry entry for opportunistic locking has changed in Windows 2000 from the earlier location in Microsoft Windows NT.
Windows 2000 will still respect the EnableOplocks registry value used to disable oplocks in earlier versions of Windows.
You can also deny the granting of opportunistic locks by changing the following registry entries:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\MRXSmb\Parameters\

	OplocksDisabled REG_DWORD 0 or 1
	Default: 0 (not disabled)
The OplocksDisabled registry value configures Windows clients to either request or not request opportunistic locks on a remote file. To disable oplocks, the value of OplocksDisabled must be set to 1.
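For illustration only, this client-side value could be set with a standard Registry Editor (.reg) import file such as the following sketch; reboot the PC afterwards so the setting takes effect:

Windows Registry Editor Version 5.00

; Ask the client redirector not to request oplocks on remote files
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\MRXSmb\Parameters]
"OplocksDisabled"=dword:00000001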
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanServer\Parameters

	EnableOplocks REG_DWORD 0 or 1
	Default: 1 (Enabled by Default)

	EnableOpLockForceClose REG_DWORD 0 or 1
	Default: 0 (Disabled by Default)
The EnableOplocks value configures Windows-based servers (including Workstations sharing files) to allow or deny opportunistic locks on local files.
To force closure of open oplocks on close or program exit, EnableOpLockForceClose must be set to 1.
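Similarly, a hedged sketch of a .reg import file that disables the granting of oplocks on the server side and forces closure of open oplocks (using the values described above) might look like this:

Windows Registry Editor Version 5.00

; Stop the Server service from granting oplocks on local files
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanServer\Parameters]
"EnableOplocks"=dword:00000000
"EnableOpLockForceClose"=dword:00000001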
An illustration of how level II oplocks work:
Station 1 opens the file, requesting oplock.
Since no other station has the file open, the server grants station 1 exclusive oplock.
Station 2 opens the file, requesting oplock.
Since station 1 has not yet written to the file, the server asks station 1 to Break to Level II Oplock.
Station 1 complies by flushing locally buffered lock information to the server.
Station 1 informs the server that it has Broken to Level II Oplock (alternatively, station 1 could have closed the file).
The server responds to station 2's open request, granting it level II oplock. Other stations can likewise open the file and obtain level II oplock.
Station 2 (or any station that has the file open) sends a write request SMB. The server returns the write response.
The server asks all stations that have the file open to Break to None, meaning no station holds any oplock on the file. Because the workstations can have no cached writes or locks at this point, they need not respond to the break-to-none advisory; all they need do is invalidate locally cached read-ahead data.
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanWorkstation\Parameters

	UseOpportunisticLocking REG_DWORD 0 or 1
	Default: 1 (true)
Indicates whether the redirector should use opportunistic-locking (oplock) performance enhancement. This parameter should be disabled only to isolate problems.
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanServer\Parameters

	EnableOplocks REG_DWORD 0 or 1
	Default: 1 (true)
Specifies whether the server allows clients to use oplocks on files. Oplocks are a significant performance enhancement, but have the potential to cause lost cached data on some networks, particularly wide-area networks.
	MinLinkThroughput REG_DWORD 0 to infinite bytes per second
	Default: 0
Specifies the minimum link throughput allowed by the server before it disables raw and opportunistic locks for this connection.
	MaxLinkDelay REG_DWORD 0 to 100,000 seconds
	Default: 60
Specifies the maximum time allowed for a link delay. If delays exceed this number, the server disables raw I/O and opportunistic locking for this connection.
	OplockBreakWait REG_DWORD 10 to 180 seconds
	Default: 35
Specifies the time that the server waits for a client to respond to an oplock break request. Smaller values can allow detection of crashed clients more quickly but can potentially cause loss of cached data.
If you have applied all of the settings discussed in this paper but data corruption problems and other symptoms persist, here are some additional things to check out:
We have credible reports from developers that faulty network hardware, such as a single faulty network card, can cause symptoms similar to read caching and data corruption. If you see persistent data corruption even after repeated reindexing, you may have to rebuild the data files in question. This involves creating a new data file with the same definition as the file to be rebuilt and transferring the data from the old file to the new one. There are several known methods for doing this that can be found in our Knowledge Base.
In some sites locking problems surface as soon as a server is installed, in other sites locking problems may not surface for a long time. Almost without exception, when a locking problem does surface it will cause embarrassment and potential data corruption.
Over the past few years there have been a number of complaints on the Samba mailing lists claiming that Samba caused data corruption. Three causes have been identified so far:
Incorrect configuration of opportunistic locking (incompatible with the application being used). This is a VERY common problem even where MS Windows NT4 or MS Windows 200x based servers were in use. It is imperative that the software application vendor's instructions for configuration of file locking be followed. If in doubt, disable oplocks on both the server and the client. Disabling all forms of file caching on the MS Windows client may also be necessary.
Defective network cards, cables, or hubs/switches. This is generally a more prevalent factor with low-cost networking hardware, though occasionally there have also been problems with incompatibilities in more up-market hardware.
There have been some random reports of samba log files being written over data files. This has been reported by very few sites (about 5 in the past 3 years) and all attempts to reproduce the problem have failed. The Samba-Team has been unable to catch this happening and thus has NOT been able to isolate any particular cause. Considering the millions of systems that use samba, for the sites that have been affected by this as well as for the Samba-Team this is a frustrating and a vexing challenge. If you see this type of thing happening please create a bug report on https://bugzilla.samba.org without delay. Make sure that you give as much information as you possibly can to help isolate the cause and to allow reproduction of the problem (an essential step in problem isolation and correction).
You may want to check for an updated version of this white paper on our Web site from time to time. Many of our white papers are updated as information changes. For those papers, the Last Edited date is always at the top of the paper.
Section of the Microsoft MSDN Library on opportunistic locking:
Opportunistic Locks, Microsoft Developer Network (MSDN), Windows Development > Windows Base Services > Files and I/O > SDK Documentation > File Storage > File Systems > About File Systems > Opportunistic Locks, Microsoft Corporation. http://msdn.microsoft.com/library/en-us/fileio/storage_5yk3.asp
Microsoft Knowledge Base Article Q224992 "Maintaining Transactional Integrity with OPLOCKS", Microsoft Corporation, April 1999, http://support.microsoft.com/default.aspx?scid=kb;en-us;Q224992.
Microsoft Knowledge Base Article Q296264 "Configuring Opportunistic Locking in Windows 2000", Microsoft Corporation, April 2001, http://support.microsoft.com/default.aspx?scid=kb;en-us;Q296264.
Microsoft Knowledge Base Article Q129202 "PC Ext: Explanation of Opportunistic Locking on Windows NT", Microsoft Corporation, April 1995, http://support.microsoft.com/default.aspx?scid=kb;en-us;Q129202.