A few notes on working with Linux memory management.
Understanding Linux Memory
See: http://www.kernel.org/doc/gorman/
This is a mirror of Mel Gorman's book "Understanding the Linux Virtual Memory Manager".
Clearing caches: /proc/sys/vm/drop_caches
It's possible to directly clear caches (pagecache, dentries, inodes) by using /proc:
drop_caches

Writing to this will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.

To free pagecache:
  echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
  echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
  echo 3 > /proc/sys/vm/drop_caches

As this is a non-destructive operation and dirty objects are not freeable, the user should run `sync' first.
See: http://www.kernel.org/doc/Documentation/sysctl/vm.txt
I've found this useful for testing file-IO-intensive applications, where I want to be sure that the application is reading from disk rather than from the pagecache. It's also potentially useful for clearing out the dentries and inodes after (or during) running an application that opens a very large number of files, because the kernel caches these in case they are used again and only releases them from memory gradually. See also my comments on vfs_cache_pressure later in this page.
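For example, here is a minimal sketch of how I prepare for such a test - the command at the end is just a placeholder for the real workload:

  # flush dirty data to disk first - drop_caches only discards clean objects
  sync
  # drop pagecache, dentries and inodes (needs to be run as root)
  echo 3 > /proc/sys/vm/drop_caches
  # confirm that the cached/buffered figures have shrunk
  free -m
  # now run the IO-intensive application against cold caches
  ./my_io_test   # placeholder for the real test command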
Reducing inode/dentry time in the cache: /proc/sys/vm/vfs_cache_pressure
Controls the tendency of the kernel to reclaim the memory which is used for caching of directory and inode objects. At the default value of vfs_cache_pressure=100 the kernel will attempt to reclaim dentries and inodes at a "fair" rate with respect to pagecache and swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will never reclaim dentries and inodes due to memory pressure and this can easily lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes.
See: http://www.kernel.org/doc/Documentation/sysctl/vm.txt
I've seen this discussed in relation to slocate (https://lkml.org/lkml/2006/8/3/173, with some useful pointers later in the thread: https://lkml.org/lkml/2006/8/4/111), but it could be effective on any server where an application reads over a vast number of files only once, causing the slab cache to grow rapidly and consume a large amount of memory that you know can safely be released. Note that reclaiming the cache more aggressively could cause performance problems if other applications are actively using its contents, which would then need to be re-read and re-allocated.
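As a sketch, the setting can be checked and changed with sysctl - the value 10000 below is purely illustrative, not a recommendation:

  # show the current value (the default is 100)
  sysctl vm.vfs_cache_pressure
  # temporarily make the kernel reclaim dentries/inodes more aggressively
  sysctl -w vm.vfs_cache_pressure=10000
  # to make it persistent, add "vm.vfs_cache_pressure = 10000" to /etc/sysctl.conf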
What's in the slab?
Frequently used kernel objects (such as the inodes and dentries mentioned above) are cached in the slab cache - detailed information can be found in /proc/slabinfo
See documentation: http://www.kernel.org/doc/man-pages/online/pages/man5/slabinfo.5.html
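A quick way to see which slab caches are largest (a rough sketch - slabtop is part of the procps package and may need root to show everything):

  # interactive view of slab caches, sorted by cache size
  slabtop -s c
  # or pull the dentry and inode caches straight out of /proc/slabinfo
  grep -E 'dentry|inode_cache' /proc/slabinfo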
pdflush and the Linux page cache
The problem: A system is running perfectly well much of the time but has periods of absolute lock-up, with messages like "INFO: task cron:4741 blocked for more than 120 seconds.". During these times activity on the system grinds to a halt and ssh logins fail or time out.
One thing worth trying is to check and adjust the LinuxIoScheduler - though the effectiveness of this approach is difficult to measure and it seems to give only a subtle performance change.
Looking at the system during problem periods, pdflush appears quite often in the process list, and system monitoring tools (for example http://munin-monitoring.org/) show that the amount of buffered/dirty cached data is large and/or fluctuating.
Another place to look is the behaviour of pdflush - the kernel daemon responsible for pushing cached disk writes back to disk. Here Gregory Smith's article "The Linux Page Cache and pdflush: Theory of Operation and Tuning for Write-Heavy Loads" has a lot of good ideas, primarily its tuning recommendations for write-heavy operations. The main point is that dirty_background_ratio should be the focus - the setting that controls how much dirty cached data is allowed to accumulate in memory before background writeback starts.
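As a rough sketch of that kind of tuning - the value 5 is illustrative only, and the right number depends on RAM size and workload:

  # watch how much dirty data is waiting to be written back
  grep -E 'Dirty|Writeback' /proc/meminfo
  # current setting: the percentage of memory that may be dirty before
  # background writeback starts (often 10 by default)
  cat /proc/sys/vm/dirty_background_ratio
  # start background writeback earlier so that writes are smoothed out
  sysctl -w vm.dirty_background_ratio=5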
Main references:
Other External References
A great description of chasing down a memory leak in Node.js, starting with the use of pmap to check the overall way memory is being allocated by a process.
http://www.joyent.com/blog/walmart-node-js-memory-leak
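As a minimal sketch of that first pmap step (the pgrep pattern is just an example - substitute the real process ID):

  # extended output with an RSS column for every mapping; the last line gives totals
  pmap -x $(pgrep -f node | head -n 1)
  # watching the total grow over time is a simple way to confirm a leak
  pmap -x $(pgrep -f node | head -n 1) | tail -n 1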