systemd-nspawn jail / chroot

This was done on Debian 13, but should work on any modern systemd distro. The main advantage here is that you can enable per-user containers / jails for SSH and rsync.

First let’s create a minimal install with debootstrap

debootstrap --variant=minbase trixie /var/lib/machines/jail http://deb.debian.org/debian

We’ll create the nspawn config next in /etc/systemd/nspawn/jail.nspawn

[Exec]
Boot=yes

[Files]
BindReadOnly=/Videos

Reload systemd and enable the container, making sure systemd (and dbus) are installed inside it, since Boot=yes needs an init to hand control to

systemctl daemon-reload
systemctl enable --now systemd-nspawn@jail.service
systemd-nspawn -D /var/lib/machines/jail /bin/bash
# apt-get install systemd dbus
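
The sshd match block and wrapper below assume a 'jail' user, which has to exist both on the host (the account you SSH to) and inside the container (the account the wrapper drops into). On the host:

# useradd -m jail

And inside the container (e.g. from the nspawn shell above):

# useradd -m jail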

Create a minimal wrapper, mostly so we can preserve rsync functionality. We’re using /usr/local/bin/nspawn-ssh-wrapper

#!/bin/bash
set -euo pipefail

MACHINE="jail"
USER="jail"

CMD="${SSH_ORIGINAL_COMMAND:-}"

# Find the container leader PID (host PID of container init)
LEADER="$(/usr/bin/machinectl show "$MACHINE" -p Leader --value)"

if [ -z "$LEADER" ] || [ "$LEADER" = "0" ]; then
  echo "Container $MACHINE not running" >&2
  exit 1
fi

if [ -z "$CMD" ]; then
  # Interactive: machinectl shell is fine (banner doesn't matter)
  exec /usr/bin/machinectl shell "${USER}@${MACHINE}"
else
  # Non-interactive (rsync/scp): MUST be banner-free.
  # Enter namespaces and run the SSH_ORIGINAL_COMMAND as the container user.
  exec /usr/bin/nsenter -t "$LEADER" -a \
    /usr/sbin/runuser -u "$USER" -- /bin/sh -c "$CMD"
fi

In your sshd_config

Match User jail
    ForceCommand /usr/bin/sudo -n --preserve-env=SSH_ORIGINAL_COMMAND /usr/local/bin/nspawn-ssh-wrapper
    PermitTTY yes
    X11Forwarding no
    AllowTcpForwarding no

And in /etc/sudoers

Defaults:jail env_keep += "SSH_ORIGINAL_COMMAND"
jail ALL=(root) NOPASSWD: /usr/local/bin/nspawn-ssh-wrapper
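
With this in place, non-interactive transfers go through the wrapper transparently; for example, pulling the read-only bind mount from a client (the hostname is a placeholder, and rsync must be installed inside the container):

rsync -av jail@host.example.com:/Videos/ ./Videos/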

Hauppauge WinTV HVR-950 tuner record

Select the video input. Input 1 is composite here; use 0 for the coax tuner instead
# v4l2-ctl -d /dev/video2 --set-input=1


We need to force the pixel format or you'll get discoloration

# ffmpeg -f v4l2 -channel 1 -input_format uyvy422 -video_size 720x480 -framerate 30000/1001 -i /dev/video2 -f alsa -thread_queue_size 1024 -i hw:1 -pix_fmt yuv420p -c:v libx264 -preset veryfast -crf 18 -c:a aac -b:a 192k /tmp/vhs_fixed.mp4
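
The device node and ALSA card used above (/dev/video2 and hw:1) vary between systems; these commands list the available inputs and capture devices so you can confirm them:

# v4l2-ctl -d /dev/video2 --list-inputs
# arecord -l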

nfsd open file limits

Recently I came across a user who requested an increase to the ulimit settings for nfsd kernel processes.

root      1122  0.0  0.0      0     0 ?        S    11:43   0:00 [nfsd]

# grep 'open file' /proc/1122/limits
Max open files            1024                 4096                 files

This appears to default to 1024/4096 soft/hard.

As you can see from the brackets surrounding nfsd, this is a kernel thread spawned from kthreadd and thus won't inherit limits from systemd (or limits.conf).

I decided to throw together a quick C++ program proving that these limits do not impact how many open files a client can utilize.

#include <iostream>
#include <fstream>
#include <dirent.h>
#include <chrono>
#include <thread>
#include <cstdio>
#include <unistd.h>

using namespace std;
int main() {
        const char *path = "/export";
        DIR *dir;
        struct dirent *entry;

        // Work out of the NFS mount so entries can be opened by name
        chdir(path);
        dir = opendir(path);

        // Enough slots for 8192 files plus "." and ".."
        std::fstream fs[8194];
        int count = 0;

        // Open every entry and keep the handles around
        while ((entry = readdir(dir)) != NULL) {
          printf("  %s\n", entry->d_name);
          fs[count].open(entry->d_name);
          count++;
        }
        // Hold everything open long enough to inspect with lsof
        std::this_thread::sleep_for(std::chrono::milliseconds(100000));
        closedir(dir);
        return 0;
}

On the NFS server in question, I created 8192 files.

[root@nfs export]# for x in {1..8192}; do touch $x; done

I also ensured that only 1 [nfsd] thread was running (to rule out the open files being split between multiple nfsd threads).
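
rpc.nfsd sets the nfsd thread count at runtime, and procfs shows the current value:

# rpc.nfsd 1
# cat /proc/fs/nfsd/threads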

On the client I made sure the user had appropriate ulimit settings

# ulimit -n
9000

Then I ran the above program to hold open all 8192 files. As you can see below, there was no problem doing so.

# lsof +D /export/ | wc -l
8191

Tested with NFSv3 (with lockd) and NFSv4.

Conclusion: the [nfsd] limits shown in /proc have no impact on NFS clients.

Bluetooth headset with Qubes

NOTE: This was done with a Bluetooth USB adapter. If you use your wireless card's built-in Bluetooth you should be able to do the same; you'll just need to do it in sys-net instead of a separate qube.

First, create a Fedora-based 'bluetooth' qube that we will attach the USB adapter to.
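
From dom0, something like this creates it (assuming Qubes 4.x tooling; the template name is just an example):

qvm-create --template fedora-40 --label blue bluetooth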

Install required packages:

# dnf install blueman udev-x11 

Add the following to /etc/pulse/qubes-default.pa, adjusting 10.137.0.0/24 if your qube network differs

load-module module-bluetooth-discover
load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1,10.137.0.0/24 auth-anonymous=1

Add user to audio group

# usermod -a -G audio user

Create /etc/systemd/user/pulseaudio.service

[Unit]
After=sound.target network.target avahi-daemon.service
Requires=sound.target
Wants=avahi-daemon.service
Description=PulseAudio Sound System

[Service]
Type=dbus
BusName=org.pulseaudio.Server
BusName=org.PulseAudio1
ExecStart=/usr/bin/pulseaudio -vv
ExecStop=/usr/bin/pulseaudio --kill
Restart=always

[Install]
WantedBy=default.target

Reload systemd (or just reboot)

# systemctl daemon-reload 

As the user, enable it so pulseaudio starts automatically

# systemctl --user enable pulseaudio.service 

Create a script to handle the blueman-applet in /root/bluetooth.sh

#!/bin/bash
while true; do
   sudo -u user blueman-applet
   sleep 1
done

Make it executable

# chmod +x /root/bluetooth.sh

Add the following to /rw/config/rc.local

iptables -I INPUT -s <CLIENT IP> -j ACCEPT
/root/bluetooth.sh &

Add firewall rule on sys-firewall qube in /rw/config/qubes-firewall-user-script

iptables -I FORWARD 2 -s <CLIENT IP> -d <BLUETOOTH IP> -j ACCEPT

On each client, add the following to /etc/profile to ensure your applications use your bluetooth qube for audio

export PULSE_SERVER=<BLUETOOTH IP>
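
From a client qube you can then verify the connection (pactl comes from pulseaudio-utils):

$ PULSE_SERVER=<BLUETOOTH IP> pactl info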

Now when you attach the USB bluetooth adapter to the bluetooth qube the applet should appear and you’re good to go.
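
If the adapter sits behind a USB qube, attaching it from dom0 looks something like this (sys-usb and the device ID are just examples; qvm-usb with no arguments lists available devices):

qvm-usb attach bluetooth sys-usb:2-5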

WordPress mod_proxy tips and tricks

We have an Apache server using mod_proxy to serve WordPress from another server. SSL is terminated on the Apache side.

Apache(80) -> WordPress(80)
Apache(443) -> WordPress(80)
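
Because the WordPress backend only ever sees plain HTTP, the proxy vhost has to pass the original scheme along; mod_proxy does not add an X-Forwarded-Proto header by itself. A minimal sketch of the SSL-side vhost (hostname, certificate paths, and WORDPRESS-IP-HOST are placeholders; mod_headers must be enabled):

<VirtualHost *:443>
    ServerName blog.domain.fqdn
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/blog.domain.fqdn/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/blog.domain.fqdn/privkey.pem

    ProxyRequests Off
    ProxyPreserveHost On
    # Tell WordPress the client connected over HTTPS (used by .htaccess and wp-config.php below)
    RequestHeader set X-Forwarded-Proto "https"

    ProxyPass / http://WORDPRESS-IP-HOST:80/
    ProxyPassReverse / http://WORDPRESS-IP-HOST:80/
</VirtualHost>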

.htaccess


RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://%{SERVER_NAME}/$1 [R,L]

wp-config.php

At the top of the file:


define('FORCE_SSL_ADMIN', true);
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https')
    $_SERVER['HTTPS'] = 'on';

plex mod_proxy (Proxy plex through Apache)

The vhost below assumes you're using Let's Encrypt; just replace plex.domain.fqdn with your hostname and PLEX-IP-HOST with the IP or hostname of your Plex server. This is useful on networks that block odd ports like 32400 or only allow HTTP/HTTPS.

<VirtualHost *:80>
    ServerName plex.domain.fqdn
    Redirect / https://plex.domain.fqdn/

   ErrorLog  ${APACHE_LOG_DIR}/plex_error.log
   CustomLog  ${APACHE_LOG_DIR}/plex.log combined

    <Location />
        Order allow,deny
        Allow from all
    </Location>
</VirtualHost>

<VirtualHost *:443>
    ServerName plex.domain.fqdn
    ProxyRequests Off
    ProxyPreserveHost On

    SSLProxyEngine On

    SetEnv newrelic_appname "http-plex"
    php_value newrelic.appname "http-plex"

    ErrorLog  ${APACHE_LOG_DIR}/plex_error.log
    CustomLog ${APACHE_LOG_DIR}/plex.log combined

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/plex.domain.fqdn/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/plex.domain.fqdn/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/plex.domain.fqdn/fullchain.pem

    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPass / http://PLEX-IP-HOST:32400/
    ProxyPassReverse / http://PLEX-IP-HOST:32400/

    <Location />
        Order allow,deny
        Allow from all
    </Location>
</VirtualHost>

Apache + PHP-FPM on CentOS 6

Note: This assumes you have enabled the IUS repo (ius.io) for PHP 5.6. The steps should be the same no matter which version of php-fpm you use.

Install required packages


# yum install httpd mod_ssl php56u-fpm mod_proxy_fcgi

# chkconfig httpd on && chkconfig php-fpm on

Edit php-fpm configuration

/etc/php-fpm.d/www.conf


listen = 127.0.0.1:9000
listen.owner = apache
listen.group = apache
listen.mode = 0660
user = apache
group = apache

Create /etc/httpd/conf.d/proxy.conf


DirectoryIndex index.php

<Proxy "*">
    Order allow,deny
    Allow from all
</Proxy>

ProxyRequests Off
ProxyPreserveHost On
ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/var/www/html/$1


Start services


# service php-fpm start
# service httpd start
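
To confirm requests are actually being handed off to PHP-FPM, a quick check (the test file name is arbitrary; remove it afterwards):

# echo '<?php phpinfo();' > /var/www/html/info.php
# curl -s http://localhost/info.php | grep 'PHP Version'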


Increase stripe_cache_size for mdadm/md devices permanently

Create /etc/udev/rules.d/60-md-stripe-cache.rules


SUBSYSTEM=="block", KERNEL=="md*", ACTION=="change", TEST=="md/stripe_cache_size", ATTR{md/stripe_cache_size}="16384"

Reload the udev rules and trigger them; the change takes effect immediately.


udevadm control --reload-rules
udevadm trigger

Confirm (where md0 is your md device in question)


cat /sys/devices/virtual/block/md0/md/stripe_cache_size
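
To experiment with other values on a running array before committing them to the udev rule, you can also write the sysfs attribute directly (again with md0 as the example device):

echo 16384 > /sys/devices/virtual/block/md0/md/stripe_cache_size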

ESXi 5.5 and Teaming on Cisco Switches (VLANs)


Create a basic port channel


interface Port-channel1
description ESX Port Channel
switchport mode trunk

interface GigabitEthernet0/8
description ESX team
channel-group 1 mode on
spanning-tree portfast trunk

interface GigabitEthernet0/9
description ESX team
channel-group 1 mode on
spanning-tree portfast trunk

Configure NIC teaming (Configuration – Network – Properties – vSwitch Properties) with the following settings


Load Balancing: Route based on IP hash
Network Failover Detection: Link status only
Notify Switches: Yes
Failback: No


When you’re happy with the results, update your Management network settings to use NIC teaming as well.