Recently I came across a user who requested an increase to the ulimit settings for nfsd kernel processes.
root      1122  0.0  0.0      0     0 ?        S    11:43   0:00 [nfsd]

# grep 'open file' /proc/1122/limits
Max open files            1024                 4096                 files
This appears to default to 1024/4096 soft/hard.
As you can see from the brackets surrounding nfsd, this is a kernel thread spawned from kthreadd, so it won't inherit limits from systemd (or limits.conf).
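A quick way to confirm that (a sketch, reusing PID 1122 from the example above) is to look at the parent PID:

# ps -o pid,ppid,comm -p 1122

For a kernel thread the PPID column shows 2, i.e. kthreadd, so the process never passes through pam_limits or a systemd unit at all.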
I decided to throw together a quick C++ program to prove that these limits do not affect how many files an NFS client can hold open.
#include <iostream>
#include <fstream>
#include <dirent.h>
#include <chrono>
#include <thread>
#include <cstdio>
#include <unistd.h>

using namespace std;

int main() {
    const char *path = "/export";   // NFS mount point on the client
    DIR *dir;
    struct dirent *entry;

    dir = opendir(path);
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }

    // Enough fstreams to hold every entry open at once (8192 files plus . and ..)
    std::fstream fs[8194];
    int count = 0;

    chdir(path);
    while ((entry = readdir(dir)) != NULL) {
        printf(" %s\n", entry->d_name);
        fs[count].open(entry->d_name);
        count++;
    }

    // Keep everything open long enough to inspect with lsof
    std::this_thread::sleep_for(std::chrono::milliseconds(100000));
    closedir(dir);
    return 0;
}
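If you want to reproduce this, the program needs nothing beyond a C++11 toolchain to build (the file name holdopen.cpp and binary name holdopen are just placeholders):

# g++ -std=c++11 -o holdopen holdopen.cpp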
On the NFS server in question, I created 8192 files.
[root@nfs export]# for x in {1..8192}; do touch $x; done
I also ensured that only 1 [nfsd] thread was running (to rule out the open files being split between multiple nfsd threads).
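One way to pin the server to a single thread (a sketch, assuming the standard nfsd proc interface is mounted at /proc/fs/nfsd, as it normally is on a running NFS server) is:

[root@nfs ~]# rpc.nfsd 1
[root@nfs ~]# cat /proc/fs/nfsd/threads
1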
On the client, I made sure the user had an appropriate ulimit:
# ulimit -n
9000
Then I ran the above program to hold open all 8192 files. As you can see below, there was no problem doing so.
# lsof +D /export/ | wc -l
8191
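This also hints at why the limit never comes into play: a kernel thread like [nfsd] has no per-process file descriptor table, so the files it serves are never charged against RLIMIT_NOFILE. As a sanity check on the server (again assuming PID 1122 from earlier), its fd directory under /proc stays empty even while the client holds all 8192 files open:

[root@nfs ~]# ls /proc/1122/fd/ | wc -l
0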
I tested this with both NFSv3 (with lockd) and NFSv4.
Conclusion: The [nfsd] limits shown in /proc have no impact on NFS clients.