nfsd open file limits

Recently I came across a user who requested an increase to the ulimit settings for nfsd kernel processes.

root      1122  0.0  0.0      0     0 ?        S    11:43   0:00 [nfsd]

# grep 'open file' /proc/1122/limits
Max open files            1024                 4096                 files

This appears to default to 1024/4096 soft/hard.

As you can see from the brackets surrounding nfsd, this is a kernel thread spawned from kthreadd, so it won't inherit limits from systemd (or limits.conf).

I threw together a quick C++ program to prove that these limits do not affect how many files an NFS client can hold open.

#include <iostream>
#include <fstream>
#include <cstdio>
#include <dirent.h>
#include <chrono>
#include <thread>
#include <unistd.h>

using namespace std;

int main() {
        DIR *dir;
        struct dirent *entry;

        // /export is the NFS mount on the client
        chdir("/export");
        dir = opendir(".");

        // 8192 test files plus "." and ".."
        std::fstream fs[8194];
        int count = 0;

        // open every directory entry and keep it open
        while ((entry = readdir(dir)) != NULL) {
          printf("  %s\n", entry->d_name);
          fs[count].open(entry->d_name);
          count++;
        }

        // hold the files open long enough to inspect with lsof
        std::this_thread::sleep_for(std::chrono::milliseconds(100000));
        closedir(dir);
        return 0;
}

On the NFS server in question, I created 8192 files.

[root@nfs export]# for x in {1..8192}; do touch $x; done

I also ensured that only 1 [nfsd] thread was running (to rule out the open files being split between multiple nfsd threads).
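For reference, the running thread count can be changed on the fly through the standard procfs knob (on RHEL-style distributions the boot-time count usually comes from RPCNFSDCOUNT in /etc/sysconfig/nfs; adjust for your distribution):

# echo 1 > /proc/fs/nfsd/threads      # drop to a single nfsd thread
# cat /proc/fs/nfsd/threads           # confirm it now reads 1
# ps -ef | grep '\[nfsd\]'            # should show a single [nfsd] process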

On the client I made sure the user had appropriate ulimit settings

# ulimit -n
9000

Then I ran the above program to hold open all 8192 files. As you can see below, there was no problem doing so.

# lsof +D /export/ | wc -l
8191

Tested with NFSv3 (with lockd) and NFSv4.

Conclusion: the [nfsd] limits shown in /proc have no impact on NFS clients.

Bluetooth headset with Qubes

NOTE: This was done with a USB Bluetooth adapter. If you use your wireless card's built-in Bluetooth you should be able to do the same; you'll just need to do it in sys-net instead of a separate qube.

First, create a Fedora ‘bluetooth’ qube that we will attach the USB adapter to.

Install required packages:

# dnf install blueman udev-x11 

Add the following to /etc/pulse/qubes-default.pa, replacing 10.137.0.0/24 with your qube network if it differs:

load-module module-bluetooth-discover
load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1,10.137.0.0/24 auth-anonymous=1

Add the user to the audio group:

# usermod -a -G audio user

Create /etc/systemd/user/pulseaudio.service

 [Unit]
 After=sound.target network.target avahi-daemon.service
 Requires=sound.target
 Wants=avahi-daemon.service
 Description=PulseAudio Sound System

 [Service]
 Type=dbus
 BusName=org.pulseaudio.Server
 BusName=org.PulseAudio1
 ExecStart=/usr/bin/pulseaudio -vv
 ExecStop=/usr/bin/pulseaudio --kill
 Restart=always 

 [Install]
 WantedBy=default.target 

Reload systemd (or just reboot)

# systemctl daemon-reload 

As user, enable it so pulseaudio is running at startup

# systemctl --user enable pulseaudio.service 

Create a script at /root/bluetooth.sh to keep blueman-applet running:

#!/bin/bash
# keep blueman-applet running for the unprivileged user; restart it if it exits
while true; do
   sudo -u user blueman-applet
   sleep 1
done

Make it executable

# chmod +x /root/bluetooth.sh

Add the following to /rw/config/rc.local

iptables -I INPUT -s <CLIENT IP> -j ACCEPT
/root/bluetooth.sh &

Add a firewall rule on the sys-firewall qube in /rw/config/qubes-firewall-user-script:

iptables -I FORWARD 2 -s <CLIENT IP> -d <BLUETOOTH IP> -j ACCEPT

On each client, add the following to /etc/profile to ensure your applications use your bluetooth qube for audio

export PULSE_SERVER=<BLUETOOTH IP>

Now when you attach the USB Bluetooth adapter to the bluetooth qube, the applet should appear and you're good to go.
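To confirm a client is actually sending audio to the bluetooth qube rather than its local PulseAudio, a quick check from the client (pactl ships with PulseAudio; the exact output format varies a little between versions):

pactl info | grep -i 'server string'    # should point at <BLUETOOTH IP>, not a local unix socket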

WordPress mod_proxy tips and tricks

We have an Apache server using mod_proxy to serve WordPress from another server. SSL is terminated on the Apache side, so WordPress itself only ever sees plain HTTP; the rules below rely on the HTTPS vhost passing an X-Forwarded-Proto header to the backend (mod_proxy does not add that header by itself, so set it with mod_headers' RequestHeader directive if you haven't already).

Apache(80) -> WordPress(80)
Apache(443) -> WordPress(80)

.htaccess


RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{SERVER_NAME}/$1 [R,L]

wp-config.php

At the top of the file:


define('FORCE_SSL_ADMIN', true);
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
    $_SERVER['HTTPS'] = 'on';
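A quick sanity check once everything is in place, with www.example.com standing in for your site: plain HTTP should bounce to HTTPS, and HTTPS should not redirect again (a Location header there usually means a redirect loop):

curl -sI http://www.example.com/ | grep -i '^location'     # expect: Location: https://www.example.com/
curl -sI https://www.example.com/ | grep -i '^location'    # expect: no output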

plex mod_proxy (Proxy plex through Apache)

The vhost below assumes you're using Let's Encrypt; just replace plex.domain.fqdn with your hostname and PLEX-IP-HOST with the IP/hostname of your Plex server. This is useful on networks that block odd ports like 32400 or only allow HTTP/HTTPS.

<VirtualHost *:80>
    ServerName plex.domain.fqdn
    Redirect / https://plex.domain.fqdn/

    ErrorLog  ${APACHE_LOG_DIR}/plex_error.log
    CustomLog ${APACHE_LOG_DIR}/plex.log combined

    <Location />
        Order allow,deny
        Allow from all
    </Location>
</VirtualHost>

<VirtualHost *:443>
    ServerName plex.domain.fqdn
    ProxyRequests Off
    ProxyPreserveHost On

    SSLProxyEngine On

    SetEnv newrelic_appname "http-plex"
    php_value newrelic.appname "http-plex"

    ErrorLog  ${APACHE_LOG_DIR}/plex_error.log
    CustomLog ${APACHE_LOG_DIR}/plex.log combined

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/plex.domain.fqdn/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/plex.domain.fqdn/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/plex.domain.fqdn/fullchain.pem

    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPass / http://PLEX-IP-HOST:32400/
    ProxyPassReverse / http://PLEX-IP-HOST:32400/

    <Location />
        Order allow,deny
        Allow from all
    </Location>
</VirtualHost>
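Once the vhost is live, a quick end-to-end check (the /identity endpoint should answer without authentication, which makes it a handy probe; swap in your hostname):

curl -s https://plex.domain.fqdn/identity    # a small XML response here means Apache reached Plex on 32400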

Apache + PHP-FPM on CentOS 6

Note: This assumes you have enabled the IUS repo (ius.io) for PHP 5.6. The steps should be the same no matter which version of php-fpm you use.

Install required packages


# yum install httpd mod_ssl php56u-fpm mod_proxy_fcgi

# chkconfig httpd on && chkconfig php-fpm on

Edit php-fpm configuration

/etc/php-fpm.d/www.conf


listen = 127.0.0.1:9000
listen.owner = apache
listen.group = apache
listen.mode = 0660
user = apache
group = apache

Create /etc/httpd/conf.d/proxy.conf


DirectoryIndex index.php

<Proxy "*">
    Order allow,deny
    Allow from all
</Proxy>

ProxyRequests Off
ProxyPreserveHost On
ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/var/www/html/$1


Start services


# service php-fpm start
# service httpd start
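To verify requests for .php files are actually reaching php-fpm, a throwaway test file works (info.php is just an example name; remove it when done):

# echo '<?php phpinfo();' > /var/www/html/info.php
# curl -s http://localhost/info.php | grep -i 'php version'
# rm /var/www/html/info.php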


Increase stripe_cache_size for mdadm/md devices permanently

Create /etc/udev/rules.d/60-md-stripe-cache.rules


SUBSYSTEM=="block", KERNEL=="md*", ACTION=="change", TEST=="md/stripe_cache_size", ATTR{md/stripe_cache_size}="16384"

Reload the udev rules; the change takes effect immediately.


udevadm control --reload-rules

udevadm trigger

Confirm (where md0 is your md device in question)


cat /sys/devices/virtual/block/md0/md/stripe_cache_size
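If you just want to experiment with a value before committing to the udev rule, you can write the sysfs attribute directly (again with md0 as your device); this does not survive a reboot, which is what the rule above is for.

echo 16384 > /sys/block/md0/md/stripe_cache_size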

ESXi 5.5 and Teaming on Cisco Switches (VLANs)


Create a basic port channel


interface Port-channel1
description ESX Port Channel
switchport mode trunk

interface GigabitEthernet0/8
description ESX team
channel-group 1 mode on
spanning-tree portfast trunk

interface GigabitEthernet0/9
description ESX team
channel-group 1 mode on
spanning-tree portfast trunk

Configure NIC teaming (Configuration – Network – Properties – vSwitch Properties) with the following settings


Load Balancing: Route based on IP hash
Network Failover Detection: Link status only
Notify Switches: Yes
Failback: No
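If you prefer to confirm the resulting policy from the ESXi shell rather than the vSphere client, something along these lines should echo the settings above (assuming SSH is enabled on the host and vSwitch0 is the vSwitch in question; the exact esxcli syntax can vary slightly between releases):

esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0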


When you’re happy with the results, update your Management network settings to use NIC teaming as well.


Raspberry PI Phone Home

This can be done on any systemd-based system; this example is on a Raspberry Pi running Raspbian (jessie).

# apt-get install autossh

Generate an SSH key and copy it to the server

# ssh-keygen

# ssh-copy-id root@server

Create /etc/systemd/system/autossh.service

[Unit]
Description=AutoSSH service
After=network.target

[Service]
User=root
Environment=AUTOSSH_PIDFILE=/tmp/autossh.pid
ExecStart=/usr/bin/autossh -M 0 -f -N -T -q -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -R 9993:localhost:22 root@server
PIDFile=/tmp/autossh.pid
Restart=always

[Install]
WantedBy=multi-user.target

Reload systemd

# systemctl daemon-reload

Enable/Start Service

# systemctl enable autossh

# systemctl start autossh

Now from “server” you should be able to reach your Pi

# ssh localhost -p 9993
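If that login fails, check on “server” that autossh actually brought up the reverse tunnel; the forwarded port should be listening on the loopback interface:

# ss -tln | grep 9993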


HA Linux Mail Server – Part 2 (RHCS Postfix/Dovecot/MySQL)

Now it’s time to configure our services.

On both nodes, install these services and disable them at boot (the cluster will manage starting/stopping them)


yum install postfix mysql-server dovecot
chkconfig postfix off
chkconfig mysqld off
chkconfig dovecot off

Copy the required data to our shared glusterfs storage.


cp -rp /var/lib/mysql /mysql/data/
cp -p /etc/my.cnf /mysql/data
cp -rp /etc/postfix /mail/data
cp -rp /etc/dovecot /mail/data
cp -p /etc/init.d/postfix /mail/data/postfix.sh
cp -p /etc/init.d/dovecot /mail/data/dovecot.sh
mkdir /mail/data/vmail

/mail/data/vmail will hold our users’ mail; the rest ensures our configuration files live on the shared storage so both nodes see a consistent environment.

Update /mysql/data/my.cnf with


datadir=/mysql/data/mysql

Create /mail/data/mail.sh since our cluster service needs to call both postfix and dovecot.


#!/bin/bash
if [ "$1" == "status" ]; then
    ps -ef | grep -v grep | grep "/usr/libexec/postfix/master"
    exit $?
else
    /mail/data/dovecot.sh $1; /mail/data/postfix.sh $1
    exit 0
fi

Please note this is a quick and dirty hack; you should check for more than just the postfix master process running, since we care about dovecot as well.
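For example, a slightly more honest status check might look like the sketch below, which fails if either daemon is missing (pgrep is assumed to be available; the process names matched are assumptions based on stock postfix and dovecot packages):

#!/bin/bash
if [ "$1" == "status" ]; then
    # report failure unless both postfix's master process and dovecot are running
    pgrep -f "/usr/libexec/postfix/master" > /dev/null || exit 1
    pgrep -x dovecot > /dev/null || exit 1
    exit 0
else
    /mail/data/dovecot.sh $1; /mail/data/postfix.sh $1
    exit 0
fi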

Now again on both nodes, make some symbolic links to the shared storage for our services.


mv /etc/postfix /etc/postfix.bak
ln -s /mail/data/postfix /etc/postfix
mv /etc/dovecot /etc/dovecot.bak
ln -s /mail/data/dovecot /etc/dovecot

You should now be able to start your services. If you run into any errors, check /var/log/messages or /var/log/cluster/cluster.log


clusvcadm -d postfix-svc
clusvcadm -d mysql-svc
clusvcadm -e postfix-svc
clusvcadm -e mysql-svc

To store users’ mail in /mail/data/vmail, make the following changes to /etc/postfix/main.cf; in this example we are using LDAP.

Both nodes in this case replicate LDAP information from another server, so even if the main LDAP server and one node went down, users could still authenticate to the cluster services.

accounts_server_host = localhost
accounts_search_base = dc=example,dc=com
#Assumes users have a mail: attribute, if not use something else
accounts_query_filter = (mail=%u)
#accounts_result_attribute = homeDirectory
accounts_result_attribute = mail
#accounts_result_format  =  %u/Mailbox
accounts_result_format  = /var/vmail/%u/
accounts_scope = sub
accounts_cache = yes
accounts_bind = yes
accounts_bind_dn = cn=admin,dc=example,dc=com
accounts_bind_pw = PASSWORD
accounts_version = 3

virtual_transport = virtual
virtual_uid_maps = static:5000
virtual_gid_maps = static:5000
virtual_mailbox_base = /
virtual_mailbox_maps = ldap:accounts
virtual_mailbox_domains = example.com

For Dovecot, configure LDAP normally and then make the following changes

conf.d/auth-ldap.conf.ext:  args = uid=vmail gid=vmail home=/var/vmail/%u/
conf.d/10-mail.conf:mail_location = maildir:/var/vmail/%u
dovecot.conf:mail_location = maildir:/var/vmail/%u

Once this is done, restart your services (clusvcadm -R servicename) and send some test e-mails to yourself.
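A simple smoke test from one of the nodes, assuming mailx is installed and user@example.com is a mailbox that exists in LDAP: send a message, then look for a fresh Maildir under the vmail path used above (/var/vmail in this configuration).

echo "cluster smoke test" | mail -s "HA mail test" user@example.com
ls -R /var/vmail/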