
I thought it might be fun to put my Late 2006 iMac (iMac6,1) to some use and install the latest version of Ubuntu on it: 18.04 Bionic Beaver. This older model iMac comes with a 64-bit Core 2 Duo processor. Unfortunately, the Apple EFI on it only supports 32-bit, so you will likely be unable to boot the normal install ISOs. You can avoid this problem by skipping EFI altogether and falling back to legacy BIOS booting. Matt Gadient wrote a very useful blog post describing the problem and a few different fixes. I ended up having to burn an actual DVD for the install to work; I could not get either native EFI solution to work.

After installing Ubuntu, there were two things not working out-of-the-box: the AirPort card and the nVidia drivers.

  1. The AirPort card was easy enough to fix and this is well-documented. There is an error in the dmesg output that directs you to the kernel wiki page, but I did not find that particularly useful. As usual, Stack Exchange has a great, thorough answer. If you have network connectivity already via an Ethernet cable, just go ahead and type sudo apt install firmware-b43-installer. The post outlines instructions if you need to do it offline as well. Unplug your Ethernet, reboot, and you should now be able to see your available WiFi networks.

  2. The bigger challenge was getting the nVidia driver to work. According to Ubuntu bug #1763648, Canonical will no longer be including support for many older GeForce cards, including the 7300 GT. Thankfully, Seth Forshee has already created a patch! I’m not sure of the correct way to patch an Ubuntu PPA, so I manually applied the patches to nVidia’s installer file. There are actually two patches you’ll need to apply: buildfix_kernel_4.14.patch and buildfix_kernel_4.15.patch.

However, before we get to patching and installing the official nVidia binary drivers, we need to disable nouveau, the open source version. To do that, enter these commands:

$ sudo su -
# cat << END > /etc/modprobe.d/disable-nouveau.conf
blacklist nouveau
blacklist vga16fb
blacklist rivafb
blacklist nvidiafb
blacklist rivatv
blacklist amd76x_edac
options nouveau modeset=0
END
# update-initramfs -u
# reboot

After the system finishes rebooting, we need to install the necessary build tools:

$ sudo apt install gcc make build-essential gcc-multilib dkms mesa-utils

Now we’re ready for patching. Here is the combined patch file (the aforementioned patches are “meta” patches, as they’re patches to the package and not to the software itself, which is sort of like a Russian nesting doll of software changes). This combined patch can be applied to the directory created using sh NVIDIA-Linux-x86_64-304.137.run -x.

Click here to download nvidia-304.137-bionic-18.04.patch.

Once you have the patch saved into your home directory, you can apply it and begin the installation using these commands:

$ sh ./NVIDIA-Linux-x86_64-304.137.run -x
$ cd ./NVIDIA-Linux-x86_64-304.137
$ patch -p1 < ~/nvidia-304.137-bionic-18.04.patch
$ sudo ./nvidia-installer

You can ignore the first warning about the preinstall failing; it is merely a check to make sure you want to do this. Go ahead and view it with less /usr/lib/nvidia/pre-install if you don’t believe me! The build should complete, and you should let it update your config files. Reboot again and then verify you’re using the nVidia driver:

$ lshw -c video 2>&1 | grep driver

You want to make sure you see:

       configuration: driver=nvidia latency=0

And not:

       configuration: driver=nouveau latency=0

I recently upgraded my AT&T modem from the NVG589 to the Pace 5268AC at the insistence of a technician out to repair my ONT. I had been having some packet loss issues and hoped the upgrade would help (it seemed to). However, I immediately noticed my NTP packets were being filtered. I thought this was odd, but found the following resources:

After studying all of these pages and looking at countless lines of tcpdump output, I confirmed a few things:
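
The core finding is easy enough to reproduce with ntpdate, which can query from either an unprivileged (ephemeral) source port or the default port 123. A quick sketch of the comparison:

$ ntpdate -q -u 0.pool.ntp.org    # -u: ephemeral source port, replies arrive
$ ntpdate -q 0.pool.ntp.org       # source port 123, filtered behind the 5268AC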

Sadly, the Network Time Foundation’s reference implementation of ntpd only permits using an ephemeral source port with ntpdate, not from ntpd itself. Therefore, while behind an AT&T network, it seems impossible to run ntpd without some kind of middleware to mangle your UDP packets (e.g., iptables; a sketch follows below). This makes things very difficult in a modern home with all the following devices using NTP:
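
If you did want to go the packet-mangling route, something like this on a Linux NAT gateway should rewrite ntpd’s outbound source port into the ephemeral range (a sketch only; the interface name and port range are my assumptions, and I did not end up going this way myself):

# iptables -t nat -A POSTROUTING -o eth0 -p udp --sport 123 --dport 123 -j MASQUERADE --to-ports 49152-65535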

I was able to configure a local NTP server using OpenBSD’s OpenNTPd, which uses ephemeral source ports by default. I was able to reconfigure most of my devices to use my local time server; however, many devices do not expose that configuration to the end user - in particular iOS devices and Apple TVs. In fact, they were the noisiest devices on the network attempting to sync time. To fix those devices, I had to add stub DNS zones to my local DNS. I created replacement zones for:

Once I pointed those hostnames to my local NTP server, everything seemed to resolve itself. Even systems that I cannot change from ntpd to OpenNTPd can now sync time using my local servers, which is pretty great.
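
The stub zones themselves are trivial. With Unbound, for example, an override looks like this (Unbound and the 192.168.1.10 address are my assumptions here; substitute your resolver and your NTP server’s IP):

server:
    local-zone: "time.apple.com." redirect
    local-data: "time.apple.com. 300 IN A 192.168.1.10"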

I also looked at using Chrony instead of OpenNTPd, but after installation on my FreeBSD server I received this scary message:

Unfortunately, this software has shameful history of several vulnerabilities
previously discovered.  FreeBSD Project cannot guarantee that this spree had
come to an end.  Please type ``pkg delete chrony'' to deinstall the port
if tight security is a concern.

I liked the idea of using Chrony because my system monitoring tool of choice, Check_MK, already had a plugin to monitor Chrony, and as far as I can tell didn’t have one for OpenNTPd. On top of that, Chrony is now the default NTP server in Red Hat Enterprise Linux starting with version 7.

Since I trust OpenBSD’s security track record over both a group I’ve never heard of and the most alarming message I’ve ever seen installing a port, I stuck with OpenNTPd. I was also able to make quick work of a Check_MK plugin, despite ntpctl’s absolutely terrible output. (Seriously, who designed that?) Here’s the plugin - you just need to add the sample output’s command (a concrete version follows) either as a script in your plugins directory or directly to check_mk_agent itself, prepending its output with <<<openntpd>>> of course.
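
Concretely, the agent side is just the sample-output command from the plugin’s header, wrapped in a section marker (assuming a stock OpenNTPd install with ntpctl in the PATH):

echo '<<<openntpd>>>'
ntpctl -s a | paste - - | nl | egrep -E '^     1[[:space:]]|[*]' | cut -f 2- | sed -e 's/[[:space:]][[:space:]]*/ /g'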

#!/usr/bin/python

# 2017 (c) adufray.com
# bsd license
# openntpd check_mk plugin to fit with ntp checks
# note: only supports servers/peers, not sensors

ntp_default_levels = (10, 200.0, 500.0) # stratum, ms offset

# Example output from agent:
# $ ntpctl -s a | paste - - | nl | egrep -E '^     1[[:space:]]|[*]' | cut -f 2- | sed -e 's/[[:space:]][[:space:]]*/ /g'
# 4/4 peers valid, clock synced, stratum 4
# 69.164.202.202 from pool us.pool.ntp.org * 1 10 3 2s 30s -0.053ms 8.154ms 1.675ms

def inventory_openntpd(info):
    if info[0]:
        return [(None, "ntp_default_levels")]

def check_openntpd(_no_item, params, info):
    if not info[0]:
        yield 2, "No status information, openntpd probably not running"
        return
    if "clock unsynced" in " ".join(info[0]):
        yield 2, "%s" % " ".join(info[0])
        return

    # Prepare parameters
    crit_stratum, warn, crit = params

    # Check offset and stratum, output a few info texts
    offset = float(info[1][-3][0:-2])
    stratum = int(info[1][-6])

    # Check stratum
    infotext = "stratum %d" % stratum
    if stratum >= crit_stratum:
        yield 2, infotext + " (maximum allowed is %d)" % (crit_stratum - 1)
    else:
        yield 0, infotext

    # Check offset
    status = 0
    infotext = "offset %.4f ms" % offset
    if abs(offset) >= crit:
        status = 2
    elif abs(offset) >= warn:
        status = 1
    if status:
        infotext += " (levels at %.4f/%.4f ms)" % (warn, crit)
    yield status, infotext, [ ("offset", offset, warn, crit, 0, None) ]

    # Show additional information
    if info[1][1] == "from":
       yield 0, "reference: %s" % "/".join(info[1][0:4:3])
    else:
       yield 0, "reference: %s" % "/".join(info[1][0:2])

check_info["openntpd"] = {
   'check_function':          check_openntpd,
   'inventory_function':      inventory_openntpd,
   'service_description':     'NTP Time',
   'has_perfdata':            True,
   'group':                   'ntp_time',
}

I suppose this will be a solid enough solution until Poul-Henning Kamp’s masterpiece, Ntimed, is released!

In the process of installing a new FreeBSD system, I was presented with the question of which mirror to use. Of course, I’m doing a netinstall, so I want the fastest mirror, but how can I find that information?

for i in {1..15}; do
    echo -n "ftp${i}.us.freebsd.org: ";
    ping -qc1 ftp${i}.us.freebsd.org | tail -1;
done

This was nice, but it ultimately only measures latency. Most of the servers were between 30 and 40 ms, so nothing conclusive. How about measuring actual transfer speed? To do that, I need a smallish file to download from each one. I found a 25 MB file in the CVS-archive directory that suited this nicely.

for i in {1..15}; do
    echo -n "ftp${i}.us.freebsd.org: ";
    curl -o /dev/null -m 30 ftp://ftp${i}.us.freebsd.org/pub/FreeBSD/development/CVS-archive/projcvs-projects-archive.tar.gz 2>&1 | \
        tail -1 | \
        egrep -o '[^[:cntrl:]]+$';
done

The output could probably be cleaner, but all you have to do is check the 9th column for the duration of the transfer, and whoever scores lowest wins!
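
In hindsight, curl can report the average transfer speed directly with its -w option, which makes for cleaner output (same mirrors, same file):

for i in {1..15}; do
    echo -n "ftp${i}.us.freebsd.org: ";
    curl -s -o /dev/null -m 30 -w '%{speed_download} bytes/sec\n' \
        ftp://ftp${i}.us.freebsd.org/pub/FreeBSD/development/CVS-archive/projcvs-projects-archive.tar.gz;
done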

I’ve been a long-time user of XBMC, the Xbox Media Center software, since back when it ran on the original Xbox. It’s always impressed me what the developers could do with such little horsepower. I remember playing 720p XviDs on the original Xbox, back when that was difficult to do any other way.

Now, it is just as impressive. You can stream full bitrate Blu-ray images to your TV via LibreELEC/Kodi and a cheap Raspberry Pi 3. Here’s my part list:


Part                     Price
----                     -----
Raspberry Pi 3           $35.00
16 GB microSDHC Card     $5.45
Power Supply             $6.65
VC-1 Codec License       £1.00
MPEG2 Codec License      £2.00
Kodi RPi Case            $19.95
Flirc IR USB             $14.92
Total                    ~$86.36

All you truly need is the Raspberry Pi, the microSDHC card, and the power supply (you can use any 2 amp power supply and micro USB cable you have lying around, perhaps an old iPad charger), which puts the required cost at roughly $47. The rest are nice-to-haves.

Once you have your parts, load up the LibreELEC (successor to OpenELEC) Raspberry Pi 3 image. If you buy the extra codec licenses, simply execute the following commands:

# mount -o remount,rw /flash
# cat << END >> /flash/config.txt
# replace these example values with your purchased license keys
decode_MPG2=0xbeefcafe
decode_WVC1=0xdeadfade
END
# reboot

When you reboot, SSH back into LibreELEC and type these commands to confirm they’re enabled:

# vcgencmd codec_enabled MPG2
# vcgencmd codec_enabled WVC1

You should see the following output:

MPG2=enabled
WVC1=enabled

After all this, I was able to stream ~35 Mb/s bitrate Blu-ray rips from my Samba share across wired Ethernet. The Raspberry Pi 3 also has built-in WiFi, so you can conceivably stream most things without that extra cable — probably not 30+ Mb/s bitrate though.

Do you run an nginx-based web server? Do you use the http_auth_basic_module module? This post might be for you!

I just set up a new FreeBSD server (blog post forthcoming) and decided to skip the install of httpd, preferring nginx. All the cool kids are using it, so I figured I should as well. One problem: no htpasswd command.

There are several webapps I’ve written over the years for my own personal use, and they all use HTTP Basic authentication to regulate access. nginx happily supports this, if you have an existing htpasswd-formatted file to use. Unfortunately, nginx does not supply a tool to create one.
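
For reference, the nginx side of HTTP Basic auth is just two directives (the location, realm name, and file path here are illustrative):

location /private/ {
    auth_basic           "Restricted";
    auth_basic_user_file /usr/local/etc/nginx/htpasswd;
}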

On FreeBSD I found a simple enough workaround:

$ echo YOUR_PASSWORD | \
  pw useradd YOUR_USERNAME -h 0 -N | \
  awk -F: '{print $1"$"$2}' | \
  awk -F'$' '{print $1":$6$rounds=5000$"$4"$"$5;}'

Thankfully FreeBSD’s (and perhaps your distro’s) crypt(3) supports strong password hashes, the default being SHA512 with 5,000 rounds. When you supply the -N flag to the pw command, it will not actually perform the action, but instead print out the result (including the hashed password, which is what we’re after).

SHA512 with 5,000 rounds (and a random, 32-character salt) is probably more security than anyone actually needs. If we want more than that, though, we have options — other than installing httpd and gaining htpasswd. I wrote a super-simple PHP script to generate a bcrypt hash with selectable cost (2^x rounds):

bcrypt_hash.php [user]

#!/usr/local/bin/php
<?php
  // 10 is reasonable, but time the output of this script to test what is reasonable for you
  $cost = 10; 

  // write prompt to stderr so we can redirect output with > and only include what we want
  $stderr = fopen('php://stderr', 'w+');
  fwrite($stderr, 'Enter password: ');

  // disable echoing the password to terminal
  system('stty -echo');
  $passwd = stream_get_line(STDIN, 1024, PHP_EOL);
  system('stty echo');

  fwrite($stderr, "\n");

  // at least on FreeBSD you need to replace PHP's bcrypt hash version 2y with OpenBSD's 2b
  // Fixed in secure/lib/libcrypt/crypt-blowfish.c revision 284483 
  // https://svnweb.freebsd.org/base?view=revision&revision=284483
  $hashed = str_replace('$2y$', '$2b$', password_hash($passwd, PASSWORD_BCRYPT, array('cost' => $cost)));
  echo  ( (isset($argv[1])) ? $argv[1] : "user" ) . ":$hashed\n";
?>

FreeBSD’s pw will generate a bcrypt-based hash if you edit /etc/login.conf and add

blf_users:\
    :passwd_format=blf:\
    :tc=default:

Then rebuild the login capability database with cap_mkdb /etc/login.conf and add -L blf_users before the -h flag in the aforementioned command. Sadly, pw will use the default cost value of 4, which seems pointless to me. Looking over /usr/src/usr.sbin/pw/pw_user.c, the pw_pwcrypt() function generates a random salt but offers no way to specify the cost value — better to use the above PHP script.
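
If you do enable the blf_users class anyway, the earlier pipeline simplifies, since bcrypt hashes carry their parameters inline and need no reassembly (a sketch, with the same caveat about the fixed cost of 4):

$ echo YOUR_PASSWORD | \
  pw useradd YOUR_USERNAME -L blf_users -h 0 -N | \
  awk -F: '{print $1":"$2}'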

  1. Change size of virtual disk in VMware (increase only)
  2. Rescan the SCSI bus / devices

    # echo "- - -" > /sys/class/scsi_host/host0/scan
    # echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan
    # echo 1 > /sys/class/scsi_device/2\:0\:0\:0/device/rescan
    
  3. Destroy & recreate the partition with the new boundary (hopefully you used the last partition on the disk!)

    # fdisk /dev/sda
       p            (print) 
       d            (delete partition)
       3            (third partition)
       n            (new partition)
       p            (a primary partition)
       3            (the third one)
       [return]     (start block default)
       [return]     (end block default)
       w            (write changes to disk)
    
  4. Pray

    # reboot
    
  5. Let BTRFS know

    # btrfs filesystem resize max /
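
  6. Verify the new size (an optional sanity check I’d add; both are standard tools)

    # df -h /
    # btrfs filesystem show /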
    

Tired of the proprietary, outdated Ventrilo platform for in-game voice services? Me too. Thankfully, now there’s Mumble, an open source, encrypted, cross-platform, low-latency voice chat system. I’ve only been messing around with it for a short time, but so far it’s a huge improvement over Ventrilo and TeamSpeak.

Initially I became frustrated, because the server software, aka “Murmur”, is significantly less friendly than the client. The installation instructions for CentOS were garbage. There’s no EPEL package, much less a base-system package; you have to install someone’s Dropbox-hosted RPMs and a middleware repo, and one of the dependencies is Qt. Obviously the Qt dependency is for the client aspect, but still: gross. And no instructions for a non-X11 version.

Thankfully, as I mentioned, it’s open source. I found a nifty minimalistic server implementation called uMurmur. It’s designed to run on small embedded systems, so it’s very light-weight. It seems to have been built around OpenWRT, the open source router OS. Assuming you already have gcc, autoconf, and make installed, I think the only other requirements are libconfig, protobuf-c, and OpenSSL (or PolarSSL). The whole setup was pretty easy:

# yum install -y libconfig{,-devel} protobuf-c{,-devel}
# curl -O 'https://umurmur.googlecode.com/files/umurmur-0.2.14.tar.gz'
# tar -zxvf umurmur-0.2.14.tar.gz
# cd umurmur-0.2.14
# ./configure --with-ssl=openssl
# make && make install

The Makefile installs a single binary to /usr/local/bin, but no init script or default configuration. Here’s a quick init script I created (it’s sloppy, but it works):

/etc/init.d/umurmurd:

#!/bin/sh
#
# umurmurd  Minimal Mumble server
#
# chkconfig: 345 20 80
# description: umurmurd is a minimal Mumble server
# processname: umurmurd

# Source function library.
. /etc/init.d/functions

RETVAL=0
prog="umurmurd"
PIDFILE="/var/run/umurmurd.pid"
LOCKFILE="/var/lock/subsys/umurmurd"

start() {
        echo -n "Starting $prog: "
        /usr/local/bin/umurmurd -r -p $PIDFILE
        RETVAL=$?
        [ $RETVAL -eq 0 ] && touch $LOCKFILE && success || failure
        echo
        return $RETVAL
}

stop() {
        echo -n "Shutting down $prog: "
        kill $(cat "$PIDFILE") 2>/dev/null >/dev/null && success || failure
        RETVAL=$?
        [ $RETVAL -eq 0 ] && rm -f $LOCKFILE
        echo
        return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
    if [ -f "$PIDFILE" ]; then
          echo -n "PIDFILE exists:";
          ps -p $(cat ${PIDFILE})
        else
          echo "not running";
        fi
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo "Usage: $prog {start|stop|status|restart"
        exit 1
        ;;
esac
exit $?

And a very basic config file:

/etc/umurmur.conf:

max_bandwidth = 128000;
welcometext = "Welcome to Murmur!";
certificate = "/etc/pki/tls/certs/umurmur.crt";
private_key = "/etc/pki/tls/private/umurmur.key";
#password = "password_for_users";
admin_password = "admin_password";   # Set to enable admin functionality.
ban_length = 3600;            # Length in seconds for a ban. Default is 0. 0 = forever.
enable_ban = true;        # Default is false
max_users = 20;

# bindport = 64738;
# bindaddr = "192.168.1.1";

channels = ( {
    name = "Root";
    parent = "";
    description = "Root channel. No entry.";
    noenter = true;
  },
  {
    name = "Lobby";
    parent = "Root";
    description = "Lobby channel";
  },
  {
    name = "Silent";
    parent = "Root";
    description = "Silent channel";
    silent = true; # Optional. Default is false
  }
);

default_channel = "Lobby";

Before you exit your terminal, make sure to perform these last few steps:

# openssl genrsa -out /etc/pki/tls/private/umurmur.key 4096
# openssl req -new -x509 -nodes -sha1 -days 3650 -key /etc/pki/tls/private/umurmur.key -out /etc/pki/tls/certs/umurmur.crt
# chkconfig --add umurmurd
# chkconfig umurmurd on
# service umurmurd start

And, if you’re running iptables, don’t forget to add the appropriate rules.

# iptables -I INPUT 1 -m state --state NEW -m tcp -p tcp --dport 64738 -j ACCEPT
# iptables -I INPUT 1 -m state --state NEW -m udp -p udp --dport 64738 -j ACCEPT
# service iptables save

I recently wrote about how best to configure one’s SSL using Nginx. Unfortunately I recommended RC4 over many other ciphers because at the time it wasn’t completely broken. That time has come to pass.

Here is my updated configuration, unfortunately dropping SSLv3 entirely (and thus blocking IE6 on Windows XP in its default configuration):

ssl_certificate      /path/to/combined.cert.and.ca.crt;
ssl_certificate_key  /path/to/cert.key;  # make sure it's 4096 bits!
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "!ADH:!MD5:!aNULL:ECDH+AES:DH+AES:@STRENGTH:RSA+AES:3DES";

ssl_dhparam /path/to/strong/dhparam-4096.pem;
add_header Strict-Transport-Security "max-age=31536000";

ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/ca.crt;
resolver 8.8.8.8 valid=300s;
resolver_timeout 5s;

Make sure you create your strong DH parameters file (this took 45 minutes on a 2014 Mac Pro):

# openssl dhparam -out /path/to/strong/dhparam-4096.pem 4096

If you don’t care about IE on XP at all, go ahead and drop 3DES at the end of the ssl_ciphers list. That will give you a very strong rating on the Qualys SSL Labs Server Test tool. Also, if forward secrecy on Java 6u45 is important to you, go ahead and comment out the ssl_dhparam part (Java 6 cannot handle Diffie-Hellman parameters larger than 1024 bits, so nginx must fall back to its smaller default).

About the only thing you can do to get a better score at this point is to really cut back on your supported clients.

Here’s what that looks like:

ssl_certificate      /path/to/combined.cert.and.ca.crt;
ssl_certificate_key  /path/to/cert.key;  # make sure it's 4096 bits!
ssl_protocols TLSv1.2;
ssl_ciphers "!ADH:!MD5:!aNULL:ECDH+AES256:DH+AES256:@STRENGTH:RSA+AES256";

ssl_dhparam /path/to/strong/dhparam-4096.pem;
add_header Strict-Transport-Security "max-age=31536000";

ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/ca.crt;
resolver 8.8.8.8 valid=300s;
resolver_timeout 5s;

As of December 24, 2014, disabling TLSv1.0 and TLSv1.1 breaks these clients (according to Qualys), so I don’t recommend it:

Note that this is almost entirely due to dropping TLSv1.0 and not due to the lack of AES-128. By turning back on TLSv1.0 and TLSv1.1 (lowering your Protocol Support score from 100 to 95) you only lose these clients:

So, to summarize, here are my current recommended configs, one to maximize client compatibility and one to maximize (practical) security.

Compatibility:

ssl_certificate      /path/to/combined.cert.and.ca.crt;
ssl_certificate_key  /path/to/cert.key;  # make sure it's 4096 bits!
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "!ADH:!MD5:!aNULL:ECDH+AES:DH+AES:@STRENGTH:RSA+AES:3DES";

ssl_dhparam /path/to/strong/dhparam-4096.pem;
add_header Strict-Transport-Security "max-age=31536000";

ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/ca.crt;
resolver 8.8.8.8 valid=300s;
resolver_timeout 5s;

Security:

ssl_certificate      /path/to/combined.cert.and.ca.crt;
ssl_certificate_key  /path/to/cert.key;  # make sure it's 4096 bits!
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "!ADH:!MD5:!aNULL:ECDH+AES256:DH+AES256:@STRENGTH:RSA+AES256";

ssl_dhparam /path/to/strong/dhparam-4096.pem;
add_header Strict-Transport-Security "max-age=31536000";

ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/ca.crt;
resolver 8.8.8.8 valid=300s;
resolver_timeout 5s;

I expect I won’t have to update this post soon — or at least not until Nginx enables support for hybrid / dual SSL certificates for the same host. (NOT SNI, but rather having an ECC and RSA certificate for the same site. Apache already supports this and Nginx developers discussed it last year.) That or when OpenSSL supports ChaCha20-Poly1305 (this is possible today by using Google’s fork of OpenSSL, BoringSSL, but it requires manually compiling all the software).

Synology Woes

17 Dec 2014

Let’s say you have a Synology NAS for storing all your important data (i.e. Linux ISOs). Let’s say you’re a fairly paranoid person and use Synology Hybrid RAID (SHR), which can tolerate the loss of two drives simultaneously. Just in case. Let’s say you experience some kind of catastrophic failure that causes the system to think it has lost 5 of 12 drives at once.

This is what has happened to me.

To make matters worse, I had installed the bash ipkg to make using the command line more palatable. Did I follow best practices and exec bash from my ash .profile script? No. Do I regret this? Very much.

Since all of my volumes have failed, including /opt where bash is stored, I can no longer SSH into the Synology! And, more importantly: neither can support. If you haven’t created a separate user to SSH in from, this is really bad. The default configuration does not let you exec commands via ssh user@host <command> so you’re pretty much dead in the water.

If you still have an available volume, you can install a third party package that emulates a Terminal in the browser to repair /etc/passwd, but all my volumes have failed.

Just when I thought I was out of luck entirely, I stumbled on the Scheduled Tasks page in the Control Panel. A-ha! A little command-fu and I was able to send the output of commands to my webserver to fix the problems.

On server:
    $ tcpdump -s 0 -A -nni eth0 host <home IP> and port 80

In Scheduled Task:
    <command> | curl -d @- http://<server>/

Now I can execute commands and see the results in my tcpdump output. A little bit of sed and I was all patched up:

sed -i -e 's@/opt/bin/bash@/bin/ash@g' /etc/passwd 2>&1 | curl -d @- http://<server>/

Since the command executed successfully, there was no output and I was free to SSH back into my NAS. And, again, more importantly: so could support! Now, hopefully, they can repair my volumes.

Note: see updated article from 2014-DEC-24.

With all the talk about BEAST, CRIME, and POODLE, I thought it was time to revisit the SSL configuration of my blog. My blog is much easier to test with than a production web stack.

If you haven’t familiarized yourself with the excellent Qualys SSL Labs Server Test tool, do so. It is a great resource and will quickly become an invaluable part of your toolkit. I’m a bit surprised they didn’t start rate limiting me after so many tests! Here are the results of my testing, which earned this site an A+ rating.

Note: For any of my recommendations to work, you must be using OpenSSL 1.0.1j (openssl-1.0.1e-30.el6_6.2 on Red Hat Enterprise Linux (RHEL) variants) or later. I also had problems with the version of nginx in EPEL, so I updated to nginx-1.6.2-1 in the official repo.

Certificate

This is probably the easiest part. Google and Microsoft have deprecated certificates signed with SHA1 and are very forcefully recommending administrators reissue their certificates using SHA2 for the signature algorithm. This is really simple and only involves creating a new certificate signing request (CSR) and submitting it. HOWEVER, as we’ll find out later in the Key Exchange section, you’ll want to make sure you’re using a 4096-bit or greater key. The default and most common key size is 2048-bit, so now would be a good time to update.

Generate the key:
$ openssl genrsa -out /etc/pki/tls/private/www.example.com.key 4096

Generate the signing request
$ openssl req -new -key /etc/pki/tls/private/www.example.com.key \
              -out /etc/pki/tls/certs/www.example.com.csr -sha512

Submit the CSR to your certificate authority (CA) for signing and you’ve got the certificate part handled. Make a combined certificate file including your server’s certificate and then any intermediate certificates. Do not include the root CA, though — it’s unnecessary and will generate a warning at SSL labs.

Combine the certificates
$ cat www_example_com.crt intermediate1.crt intermediate2.crt > www.example.com.combined.crt

Add these lines to your nginx config

ssl_certificate /etc/pki/tls/certs/www.example.com.combined.crt;
ssl_certificate_key /etc/pki/tls/private/www.example.com.key;

Protocol Support

This is where we start making some tough decisions. SSLv2 is considered completely broken; you should definitely not be using it. SSLv3 is now essentially broken and it is recommended you disable it. However, if you disable SSLv3, you will block access to your site from legacy systems like Windows XP. Depending on your userbase, this may be a dealbreaker for you. If you want to be very aggressive, disable all protocols except TLSv1.2 — however, you’ll be limiting yourself quite a bit more at that point.

Here are the relevant configurations.

Maximum accessibility, least security

ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;

Recommended, good mix of both accessibility and security

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

Aggressive security

ssl_protocols TLSv1.2;

Key Exchange

Key Exchange is impacted both by your certificate’s key as well as the cipher suites we choose later. Choosing specific ciphers and using a strong key, however, is not enough. We also have to generate some random data beforehand to strengthen the key exchange mechanisms. The Diffie-Hellman parameters take an especially long time (about an hour on the new Mac Pro).

Generate the Diffie-Hellman parameters
$ openssl dhparam -out /etc/pki/tls/certs/dhparams-4096.pem 4096

Then add these to your nginx config

ssl_dhparam /etc/pki/tls/certs/dhparams-4096.pem;
ssl_ecdh_curve secp384r1;

The second line strengthens the elliptic curve algorithms that will be used later.

Cipher Strength

Choosing the right ciphers and, more importantly, putting them in the right order, is slightly tedious, but not particularly difficult. I sorted the ciphers by AES mode (AES-GCM is better than AES-CBC, for example), AES key length (256-bit only), and hash family (SHA384, SHA256, and then SHA). I also preferred ECDHE over DHE for the key exchange (the EC parameters are stronger than the DH parameters). OpenSSL supports a wide variety of cipher suites. I essentially limited myself to 256-bit length keys, which filtered the vast majority out. However, if you also include 128-bit AES and Camellia ciphers, you get a list that looks like this:

ECDH-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(256) Mac=AEAD
ECDH-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(256) Mac=AEAD
ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(256) Mac=AEAD
ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(256) Mac=AEAD
DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(256) Mac=AEAD

ECDH-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(128) Mac=AEAD
ECDH-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(128) Mac=AEAD
ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(128) Mac=AEAD
ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(128) Mac=AEAD
DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(128) Mac=AEAD

ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AES(256)  Mac=SHA384
ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AES(256)  Mac=SHA384
ECDH-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AES(256)  Mac=SHA384
ECDH-RSA-AES256-SHA384  TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AES(256)  Mac=SHA384

DHE-RSA-AES256-SHA256   TLSv1.2 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA256

ECDH-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AES(128)  Mac=SHA256
ECDH-RSA-AES128-SHA256  TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AES(128)  Mac=SHA256
ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AES(128)  Mac=SHA256
ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AES(128)  Mac=SHA256
DHE-RSA-AES128-SHA256   TLSv1.2 Kx=DH       Au=RSA  Enc=AES(128)  Mac=SHA256

ECDH-ECDSA-AES256-SHA   SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=AES(256)  Mac=SHA1
ECDH-RSA-AES256-SHA     SSLv3 Kx=ECDH/RSA Au=ECDH Enc=AES(256)  Mac=SHA1
ECDHE-ECDSA-AES256-SHA  SSLv3 Kx=ECDH     Au=ECDSA Enc=AES(256)  Mac=SHA1
ECDHE-RSA-AES256-SHA    SSLv3 Kx=ECDH     Au=RSA  Enc=AES(256)  Mac=SHA1
DHE-RSA-AES256-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA1
DHE-RSA-CAMELLIA256-SHA SSLv3 Kx=DH       Au=RSA  Enc=Camellia(256) Mac=SHA1

ECDH-ECDSA-AES128-SHA   SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=AES(128)  Mac=SHA1
ECDH-RSA-AES128-SHA     SSLv3 Kx=ECDH/RSA Au=ECDH Enc=AES(128)  Mac=SHA1
ECDHE-ECDSA-AES128-SHA  SSLv3 Kx=ECDH     Au=ECDSA Enc=AES(128)  Mac=SHA1
ECDHE-RSA-AES128-SHA    SSLv3 Kx=ECDH     Au=RSA  Enc=AES(128)  Mac=SHA1
DHE-RSA-AES128-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(128)  Mac=SHA1

AES256-GCM-SHA384       TLSv1.2 Kx=RSA      Au=RSA  Enc=AESGCM(256) Mac=AEAD
AES128-GCM-SHA256       TLSv1.2 Kx=RSA      Au=RSA  Enc=AESGCM(128) Mac=AEAD
AES256-SHA256           TLSv1.2 Kx=RSA      Au=RSA  Enc=AES(256)  Mac=SHA256
AES128-SHA256           TLSv1.2 Kx=RSA      Au=RSA  Enc=AES(128)  Mac=SHA256
AES256-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(256)  Mac=SHA1
CAMELLIA256-SHA         SSLv3 Kx=RSA      Au=RSA  Enc=Camellia(256) Mac=SHA1
AES128-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(128)  Mac=SHA1
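
This sort of listing comes straight out of OpenSSL’s ciphers command, by the way; something along these lines, with the exact filter string being illustrative:

$ openssl ciphers -v 'AES:CAMELLIA256:!aNULL:!eNULL:!PSK:!SRP'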

The ones without ECDHE, ECDH, or DH at the beginning do not use ephemeral keys and thus do NOT support perfect forward secrecy. I removed those from my list, but you could also just move them to the bottom.

If we want to be accessible, we need to add RC4 cipher suites to the list. (There is a discussion about the relative strength of RC4 vs. 3DES, so perhaps this recommendation will need to be updated.)

Ideally we would put RC4 ciphers at the bottom of the list, preferring to use our secure cipher suites listed above. Unfortunately, due to a flaw in SSLv3, not prioritizing RC4 above CBC-based algorithms leaves us vulnerable to the POODLE attack. Since the verbose cipher list above tells us which protocol a given cipher suite belongs to, we can move any SSLv3 CBC suite below RC4-SHA. Here’s the order I ended up with:

ECDH-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(256) Mac=AEAD
ECDH-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(256) Mac=AEAD
ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(256) Mac=AEAD
ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(256) Mac=AEAD
DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(256) Mac=AEAD

ECDH-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AESGCM(128) Mac=AEAD
ECDH-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AESGCM(128) Mac=AEAD
ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AESGCM(128) Mac=AEAD
ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AESGCM(128) Mac=AEAD
DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(128) Mac=AEAD

ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AES(256)  Mac=SHA384
ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AES(256)  Mac=SHA384
ECDH-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AES(256)  Mac=SHA384
ECDH-RSA-AES256-SHA384  TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AES(256)  Mac=SHA384

DHE-RSA-AES256-SHA256   TLSv1.2 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA256

ECDH-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH/ECDSA Au=ECDH Enc=AES(128)  Mac=SHA256
ECDH-RSA-AES128-SHA256  TLSv1.2 Kx=ECDH/RSA Au=ECDH Enc=AES(128)  Mac=SHA256
ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH     Au=ECDSA Enc=AES(128)  Mac=SHA256
ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH     Au=RSA  Enc=AES(128)  Mac=SHA256
DHE-RSA-AES128-SHA256   TLSv1.2 Kx=DH       Au=RSA  Enc=AES(128)  Mac=SHA256

ECDH-ECDSA-RC4-SHA      SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=RC4(128)  Mac=SHA1
ECDH-RSA-RC4-SHA        SSLv3 Kx=ECDH/RSA Au=ECDH Enc=RC4(128)  Mac=SHA1
ECDHE-ECDSA-RC4-SHA     SSLv3 Kx=ECDH     Au=ECDSA Enc=RC4(128)  Mac=SHA1
ECDHE-RSA-RC4-SHA       SSLv3 Kx=ECDH     Au=RSA  Enc=RC4(128)  Mac=SHA1

ECDH-ECDSA-AES256-SHA   SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=AES(256)  Mac=SHA1
ECDH-RSA-AES256-SHA     SSLv3 Kx=ECDH/RSA Au=ECDH Enc=AES(256)  Mac=SHA1
ECDHE-ECDSA-AES256-SHA  SSLv3 Kx=ECDH     Au=ECDSA Enc=AES(256)  Mac=SHA1
ECDHE-RSA-AES256-SHA    SSLv3 Kx=ECDH     Au=RSA  Enc=AES(256)  Mac=SHA1
DHE-RSA-AES256-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA1
DHE-RSA-CAMELLIA256-SHA SSLv3 Kx=DH       Au=RSA  Enc=Camellia(256) Mac=SHA1

ECDH-ECDSA-AES128-SHA   SSLv3 Kx=ECDH/ECDSA Au=ECDH Enc=AES(128)  Mac=SHA1
ECDH-RSA-AES128-SHA     SSLv3 Kx=ECDH/RSA Au=ECDH Enc=AES(128)  Mac=SHA1
ECDHE-ECDSA-AES128-SHA  SSLv3 Kx=ECDH     Au=ECDSA Enc=AES(128)  Mac=SHA1
ECDHE-RSA-AES128-SHA    SSLv3 Kx=ECDH     Au=RSA  Enc=AES(128)  Mac=SHA1
DHE-RSA-AES128-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(128)  Mac=SHA1

AES256-GCM-SHA384       TLSv1.2 Kx=RSA      Au=RSA  Enc=AESGCM(256) Mac=AEAD
AES128-GCM-SHA256       TLSv1.2 Kx=RSA      Au=RSA  Enc=AESGCM(128) Mac=AEAD

RC4-SHA                 SSLv3 Kx=RSA      Au=RSA  Enc=RC4(128)  Mac=SHA1

AES256-SHA256           TLSv1.2 Kx=RSA      Au=RSA  Enc=AES(256)  Mac=SHA256
AES128-SHA256           TLSv1.2 Kx=RSA      Au=RSA  Enc=AES(128)  Mac=SHA256
AES256-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(256)  Mac=SHA1
CAMELLIA256-SHA         SSLv3 Kx=RSA      Au=RSA  Enc=Camellia(256) Mac=SHA1
AES128-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(128)  Mac=SHA1

Summary

Here are the nginx configurations we have ended up with. Both will get ‘A+’ ratings (if you use Strict Transport Security, optional — ‘A’ otherwise); the latter of the two will get 100% in all sections.

Modestly secure, while supporting all browsers and SSLv3:

ssl on;

ssl_certificate /etc/pki/tls/certs/www.example.com.combined.crt;
ssl_certificate_key /etc/pki/tls/private/www.example.com.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;

ssl_ciphers ECDH-ECDSA-AES256-GCM-SHA384:ECDH-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDH-ECDSA-AES256-SHA384:ECDH-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDH-ECDSA-AES128-SHA256:ECDH-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDH-ECDSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:ECDHE-RSA-RC4-SHA:ECDH-ECDSA-AES256-SHA:ECDH-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA256-SHA:ECDH-ECDSA-AES128-SHA:ECDH-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:RC4-SHA:AES256-SHA256:AES128-SHA256:AES256-SHA:CAMELLIA256-SHA:AES128-SHA;

ssl_ecdh_curve secp384r1;

# only enable this if you run an SSL-only site!
add_header Strict-Transport-Security "max-age=31536000";

ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

Aggressively secure, not worrying so much about legacy clients (preferred, but by enabling TLSv1 and TLSv1.1 you can reach many more clients):

ssl on;

ssl_certificate /etc/pki/tls/certs/www.example.com.combined.crt;
ssl_certificate_key /etc/pki/tls/private/www.example.com.key;
ssl_protocols TLSv1.2;
# strong ciphers, 256 only, only with forward secrecy (breaks winxp, java, old android)
ssl_ciphers ECDH-ECDSA-AES256-GCM-SHA384:ECDH-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDH-ECDSA-AES256-SHA384:ECDH-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDH-ECDSA-AES256-SHA:ECDH-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA256-SHA;
ssl_dhparam /etc/pki/tls/certs/dhparam-4096.pem;
ssl_ecdh_curve secp384r1;

# only enable this if you run an SSL-only site!
add_header Strict-Transport-Security "max-age=31536000";

ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

Note: see updated article from 2014-DEC-24.

I’ve been using Nagios for the better part of a decade. It’s an incredibly powerful monitoring platform that’s highly extensible. About 4 or so years ago I discovered an excellent replacement for Nagios’s (outdated) interface, NPCD, and many other Nagios plugins: Check_MK. Check_MK is very light-weight and scales excellently. It can also be made to run entirely over SSH, which makes dealing with corporate firewalls a piece of cake.

Setting up a basic installation is fairly straight-forward and covered thoroughly in Check_MK’s online documentation. I’m not going to bore you with it. What I found less intuitive was configuring the distributed single-pane-of-glass interface known as Multisite. Multisite lets you consolidate several Nagios+Check_MK installations into a single view. Unfortunately, by default it’s all over cleartext and requires you to expose xinetd to the world - gross. Getting Multisite to tunnel over SSH is not difficult, but also not documented. I found several threads in the check_mk mailing list where users asked how to do it, but no one ever had a solution. Here’s mine.

Prerequisites

Start with two CentOS systems, built using the Minimal package group. One system, we’ll call it master.example.com, is going to be your single-pane-of-glass. The second system, slave.example.com, will be your remote site you want to view.

As of CentOS 6, the mod_python package is no longer included in base, which means EPEL is required. If you’re installing this on RHEL 6, don’t forget to subscribe to the rhel-x86_64-server-optional-6 repo.

# curl -O http://[mirror]/fedora/epel/6/i386/epel-release-6-8.noarch.rpm
# yum localinstall epel-release-6-8.noarch.rpm

Next, let’s install the required packages.

# yum install -y gcc gcc-c++ man make httpd gd-devel perl wget   \
                 samba-client postgresql-devel openssh-clients   \
                 openldap-devel net-snmp net-snmp-utils          \
                 bind-utils mysql mysql-devel rpcbind mod_python \
                 mod_ssl php rrdtool-perl perl-Time-HiRes php-gd 

If you want Perl Net::SNMP checks, RADIUS checks, and fping, you’ll also need to install these optional packages:

# yum install -y perl-Net-SNMP radiusclient-ng-devel fping 

Monitoring Software Installation

Once you’ve got all the required software, go ahead and download the source packages for Nagios, nagios-plugins, check_mk, and pnp4nagios:

nagios-3.5.1.tar.gz - Skip to download

nagios-plugins-2.0.3.tar.gz - Handy to have

check_mk-1.2.5i5p2.tar.gz - Live dangerously: get the innovation release

pnp4nagios-0.6.24.tar.gz - Pretty graphs

I configure each in the above order, and I install them all into /usr/local/, which is probably a holdover from my long-time romance with FreeBSD. We’ll start with Nagios:

First create the Nagios user. For some reason this has been broken in the source package for several years, and no one has bothered to fix it.

# useradd -r -d /var/log/nagios -s /bin/sh -G apache -c "nagios" nagios

Then the standard unpack, configure, and make commands, plus a bunch of extra makes:

# tar -zxvf nagios-3.5.1.tar.gz
# cd nagios
# ./configure
# make all
# make install
# make install-init
# make install-commandmode
# make install-config
# make install-webconf
# make install-exfoliation

Nagios-plugins is pretty simple; just watch out for any plugins that get skipped due to missing dependencies, in case you actually want them.

# tar -zxvf nagios-plugins-2.0.3.tar.gz
# cd nagios-plugins-2.0.3
# ./configure
# make
# make install

Check_MK is a little bit different. It comes with a setup.sh script that walks you through the configuration directories, then compiles itself and performs the install. Here are the answers I use to get everything installed under /usr/local/check_mk/:

# tar -zxvf check_mk-1.2.5i5p2.tar.gz
# cd check_mk-1.2.5i5p2
# ./setup.sh

Executable programs             /usr/local/bin
Check_MK configuration          /usr/local/check_mk/etc
Check_MK software               /usr/local/check_mk
documentation                   /usr/local/check_mk/doc
check manuals                   /usr/local/check_mk/doc/checks
working directory of Check_MK   /usr/local/check_mk/var/lib
extensions for agents           /usr/local/check_mk
configuration dir for agents    /usr/local/check_mk/etc
Name of Nagios user             nagios
User of Apache process          apache
Common group of Nagios+Apache   nagios
Nagios binary                   /usr/local/nagios/bin/nagios
Nagios main configuration file  /usr/local/nagios/etc/nagios.cfg
Nagios object directory         /usr/local/nagios/etc/check_mk.d
Nagios startskript              /etc/init.d/nagios
Nagios command pipe             /usr/local/nagios/var/rw/nagios.cmd
Check results directory         /usr/local/nagios/var/spool/checkresults
Nagios status file              /usr/local/nagios/var/status.dat
Path to check_icmp              /usr/local/nagios/libexec/check_icmp
URL Prefix for Web addons       /[SITE NAME]/    !! CHANGE THIS TO YOUR SITE NAME !!
Apache config dir               /etc/httpd/conf.d
HTTP authentication file        /usr/local/nagios/etc/htpasswd.users
HTTP AuthName                   Nagios Access
PNP4Nagios templates            /usr/local/pnp4nagios/share/templates
RRD files                       /usr/local/check_mk/pnp-rraconf
rrdcached socket                /tmp/rrdcached.sock
compile livestatus module       yes
Nagios / Icinga version         3.5.1
check_mk's binary modules       /usr/local/check_mk/lib
Unix socket for Livestatus      /usr/local/nagios/var/rw/live
Backends for other systems      /usr/local/check_mk/share/livestatus
Install Event Console           no

Pay very close attention to the URL Prefix for Web addons configuration value. This is going to be the key value for your site name in later parts of the Multisite configuration. To keep with my example, my two installations will use master and slave as the values here.

Lastly, we set up PNP4Nagios, which gives us the awesome RRD graphs for all our services. It also uses the same site prefix, so be mindful:

# tar -zxvf pnp4nagios-0.6.24.tar.gz
# cd pnp4nagios-0.6.24
# ./configure --with-base-url=/[SITE NAME]/pnp4nagios
# make all
# make fullinstall    

Configuration Files

Now, let’s get configuring! For brevity, I’ve summarized my changes below:

/usr/local/nagios/etc/nagios.cfg:
    # Comment out the localhost config:
    #cfg_file=/usr/local/nagios/etc/objects/localhost.cfg

    cfg_dir=/usr/local/nagios/etc/check_mk.d
    broker_module=/usr/local/check_mk/lib/livestatus.o /usr/local/nagios/var/rw/live
    broker_module=/usr/local/pnp4nagios/lib/npcdmod.o config_file=/usr/local/pnp4nagios/etc/npcd.cfg
    use_syslog=0
    check_for_updates=0
    process_performance_data=1
    admin_email=root@localhost
    admin_pager=root@localhost

You’ll have to read Check_MK’s online documentation to understand what each option is doing, but this is a pretty good starter config. Each system will monitor hosts only accessible by it. For example, if your master site was local to you in San Francisco, it might monitor all your California resources and not have access to resources in your remote office branch in New York (thus the slave server).

Check_MK supports all kinds of configuration options, most of which rely on the tags defined in the all_hosts variable. You can set a different data-acquisition method for all kinds of systems (in my example, executing the local Check_MK agent directly & remotely via SSH). You could also use SNMP or UNIX sockets.

/usr/local/check_mk/etc/main.mk:
    all_hosts = [ 
      'master.example.com|local',
      'host-monitored-by-master.example.com|ssh',
    ]

    datasource_programs = [
      ( "/usr/bin/sudo /usr/local/check_mk/agents/check_mk_agent.linux", [ 'local' ], ALL_HOSTS ),
      ( "ssh -i ~nagios/.ssh/id_rsa nagios@<IP> sudo /usr/local/bin/check_mk_agent", [ 'ssh' ], ALL_HOSTS ),
    ]

    ipaddresses = {
      "master.example.com" : "127.0.0.1",
    }

    extra_service_conf["normal_check_interval"] = [
      ( '5', ALL_HOSTS, [ "" ] ),
    ]

    extra_host_conf["max_check_attempts"] = [
      ( '3', ALL_HOSTS ),
    ]

System Configuration and Cleanup

That’s basically it for configuration files to get started, now let’s configure the systems themselves.

Enable services to start at boot:

# chkconfig httpd on
# chkconfig nagios on
# chkconfig npcd on

Fix various SELinux (read: disable) and permissions:

# mkdir /usr/local/check_mk/var/lib/web/admin
# chown apache:nagios /usr/local/check_mk/var/lib/web/admin
# chmod 770 /usr/local/check_mk/var/lib/web/admin
# setenforce 0
# echo 'SELINUX=permissive' >> /etc/sysconfig/selinux

Add some sudo permissions for the nagios user:

# echo 'Defaults:nagios !requiretty' >> /etc/sudoers
# echo 'nagios ALL = (root) NOPASSWD: /usr/local/check_mk/agents/check_mk_agent.linux' >> /etc/sudoers

Create an inventory of services to monitor on our systems:

# check_mk -I master.example.com host-monitored-by-master.example.com

Rebuild and restart Nagios / Check_MK

# check_mk -R

Start the other services:

# service httpd start
# service npcd start

Create an HTTPD user and password (use the same htpasswd file on both systems):

# htpasswd -c -s /usr/local/nagios/etc/htpasswd.users nagiosadmin

Now to complete the installation of PNP4nagios, open http://master.example.com/[SITE NAME]/pnp4nagios/ in your browser, then remove or rename the installation script:

# mv /usr/local/pnp4nagios/share/install.php{,.orig}

Multisite Configuration

Once you repeat the above steps for the slave.example.com server, you’re ready to configure Multisite. You’ll want to create an SSH public/private keypair for the nagios user. This keypair will be used to establish the SSH tunnel, but nothing else. We’ll lock it down to keep things safe. The keypair should only reside on master.example.com, and we’ll copy the public key into slave.example.com’s authorized_keys file.

Master:

# su - nagios
$ mkdir .ssh
$ chmod 700 .ssh
$ cd .ssh
$ ssh-keygen -t rsa -b 4096 -f ./id_rsa
$ chmod 400 id_rsa
$ chmod 444 id_rsa.pub

Slave:

# su - nagios
$ mkdir .ssh
$ chmod 700 .ssh
$ cd .ssh
$ vi authorized_keys
    command="exit",no-pty,permitopen="localhost:80",permitopen="localhost:6557",permitopen="localhost:2000",permitopen="localhost:2001" ssh-rsa AAAAAAA..long-key..ZZZZZ

The options leading the key in the authorized_keys file keep the nagios user from being able to do pretty much anything except get our host data and forward a few ports. The only command it can run is “exit”, it can’t open a pseudo-terminal, and it can only forward the ports we’ve specified: 80 (HTTP), 6557 (check_mk), and 2000 & 2001 (SSH tunnel status checks).

We’re going to start by setting up xinetd on slave.example.com — don’t worry, it’ll only listen on localhost.

# yum install -y xinetd
# cat << 'END' > /etc/xinetd.d/livestatus
    service livestatus {
      bind            = 127.0.0.1
      type            = UNLISTED
      port            = 6557
      socket_type     = stream
      protocol        = tcp
      wait            = no
      cps             = 100 3
      instances       = 500
      per_source      = 250
      flags           = NODELAY
      user            = nagios
      server          = /usr/local/bin/unixcat
      server_args     = /usr/local/nagios/var/rw/live
      only_from       = 127.0.0.1 ::1
      disable         = no
      log_type        = SYSLOG daemon info
    }
END
# chkconfig xinetd on
# service xinetd start

Now on master.example.com we’re going to setup autossh to establish and maintain the SSH tunnel.

# yum install -y autossh
# su - nagios
$ cat << 'END' > ~/autossh.bash
    #!/bin/bash

    autossh -f -M 2000 -i ~nagios/.ssh/id_rsa  -L 8081:localhost:80 -L 6558:localhost:6557 -N nagios@slave.example.com
END
$ chmod a+x autossh.bash
$ crontab -e
    @reboot bash ~/autossh.bash
$ ./autossh.bash

Lastly we need to tell Multisite about our other site and set up HTTPD to proxy the requests through our tunnel.

# vi /usr/local/check_mk/etc/multisite.mk
   sites = {
     "master" : {
       "alias" : "Master Site"
     },
     "slave": {
       "alias" : "Slave Site",
       "socket": "tcp:localhost:6558",
       "url_prefix": "/slave/",
     },
   }

# vi /etc/httpd/conf/httpd.conf
   <Location /slave>
       RewriteEngine On
       RewriteRule ^/.+/slave/(.*) http://localhost:8081/slave/$1 [P]
   </Location>

Go ahead and restart Nagios/Check_MK and HTTPD, and you should be all set.

# check_mk -R
# service httpd restart

If all went according to plan, you should be able to go to http://master.example.com/master/check_mk/ and see the systems monitored by both the master and slave servers! You should probably go ahead and configure HTTPD to use SSL, as well as configure iptables or another suitable software/hardware firewall to limit traffic appropriately.

My organization recently purchased around 100 high end Canon multi-function printers, along with a print queue management suite called uniFLOW. uniFLOW gives us a single, roaming print queue, secure print (badge required to get your print job!), and excellent print accounting metrics. With the aforementioned badge integration, we know exactly who is printing all those color copies of their fantasy team’s roster.

It is difficult to believe that here in 2014 organizations are still required to support faxing, but here we are. Even as part of an all-new printer deployment, I find myself spending an inordinate amount of time configuring, troubleshooting, and supporting fax.

The Canon multi-function printers with uniFLOW support simple scan-to-e-mail and copy functions, but fax is slightly more complicated. It’s significantly less intuitive, for starters. There is no fax icon or menu action on the large touch screen display. Instead, you have to use the Scan to E-mail process, at which point you e-mail the phone number you’d like to fax and hit send. Easy enough to remember, but getting the back end configuration right to make it work is another challenge. Here’s my configuration, in case someone else has the same difficulty.

First, make sure you’re running a recent version of RightFax. RightFax is software that allows an organization to deliver faxes to e-mail, obviating the need for every employee to have a personal fax machine — and most importantly, vice-versa: you can deliver faxes from e-mail to a fax number. Your organization may use something different, but I imagine the set up will be similar.

It took a bit of testing with RightFax’s Exchange connector and Wireshark to determine the correct To: header syntax, but here it is:

IMCEARFAX-Walk+20up+40_FN=AAABBBCCCC@domain.com

RightFax requires a recipient name and obviously a fax number destination. The above formatting includes both, broken down as follows:

IMCEARFAX-               RightFax prefix
Walk+20up                Recipient name - +20 being the hexadecimal ASCII code for a space
+40                      ASCII for the at-symbol
_FN=AAABBBCCCC           Fax number equals AAA-BBB-CCCC
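
So, for example, a fax to 415-555-1212 addressed to “John Doe” (both hypothetical) would be sent to:

IMCEARFAX-John+20Doe+40_FN=4155551212@domain.com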

Now that we know the syntax, we need a way to translate a string of only numbers into that address — it would be an onerous requirement for an end-user to have to type in that ridiculously long e-mail address every time they wanted to send a fax. Thankfully, uniFLOW gives us everything we need to be successful.

Create an XML file on the uniFLOW server, e.g. C:\fax-to-email.xml:

<MOMEMAILCONVERSION>
  <RULE NAME="SENDER">%o</RULE>
  <RULE NAME="RECIPIENT">IMCEARFAX-Walk+20up+40_FN=/{\d+}/@FAXSERVER.domain.com</RULE>
  <RULE NAME="BODY">%o</RULE>
</MOMEMAILCONVERSION>

Edit FAXSERVER.domain.com to be the FQDN of your RightFax server, then go into the uniFLOW server configuration interface and navigate to Printer -> Printer -> (The printer you want to configure) -> Next -> Device Agents tab. Scroll down to the bottom section titled Other / MIND SMTP Control. Here you can configure how the printer will handle different destination addresses. Set the Email Conversions field value to:

^{\d+}$:C:\fax-to-email.xml

Click Save, and then go send a fax!

Setting up a simple Jabber service on Red Hat Enterprise Linux (or any of its derivatives) is very straight-forward. If you install the Extra Packages for Enterprise Linux (EPEL) repository, you’ll have access to jabberd2.

The Quick Start Guide for RPM documentation provided by the jabberd2 project is almost all you need to get it up and going.

# yum install jabberd
# chkconfig jabberd on

The documentation tells you to switch to MySQL, but this is unnecessary. The default configuration uses an SQLite database for user registration and session storage, so you can set that up instead. To do so, simply initialize the database:

# sqlite3 /var/lib/jabberd/db/sqlite.db < /usr/share/jabberd/db-setup.sqlite

Then all that’s left is setting up your ID entries in c2s.xml and sm.xml:

c2s.xml:
    <local>
      <id register-enable='mu' password-change='mu'>domain.com</id>
      ...
    </local>

sm.xml:
    <local>
      <id>domain.com</id>
      ...
    </local>

Now you’re all set:

# service jabberd start

This, of course, only sets up the most basic implementation. There are a few extra steps I took to have a slightly more secure setup. I also wanted to host multiple domains for friends to use. By default:

To fix the encrypted password issue, you have to use MySQL instead of SQLite. There’s an open bug on GitHub to implement this feature, but it’s still waiting for patches. It’s pretty easy to change to MySQL, though. The database setup script, /usr/share/jabberd/db-setup.mysql, wants to create the database for you, which can cause problems if you’ve already created one, so go ahead and comment out the first few lines:

-- CREATE DATABASE jabberd2;
-- USE jabberd2;

Then you can create the database, its user, and run the script:

# mysql -u root -p
> CREATE DATABASE jabber;
> GRANT ALL ON jabber.* TO 'jabber'@'localhost' IDENTIFIED BY 'password';
> QUIT
# mysql -u jabber -p jabber < /usr/share/jabberd/db-setup.mysql
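
As a quick sanity check, make sure the tables exist:

# mysql -u jabber -p jabber -e 'SHOW TABLES;'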

Then edit both c2s.xml and sm.xml again, changing the DB driver to use MySQL instead of SQLite:

c2s.xml:
    <authreg>
      ...
      <module>mysql</module>
      ...
      <mysql>
        <host>localhost</host>
        <port>3306</port>

        <dbname>jabber</dbname>

        <user>jabber</user>
        <pass>password</pass>

        <password_type>
          <crypt/>
        </password_type>
      </mysql>
      ...
    </authreg>

sm.xml:
    <storage>
      ...
      <driver>mysql</driver>
      ...
      <mysql>
        <host>localhost</host>
        <port>3306</port>

        <dbname>jabber</dbname>

        <user>jabber</user>
        <pass>password</pass>

        <transactions/>
      </mysql>
      ...
    </storage>

This gives us a strong database backend with passwords that are somewhat securely stored. The hashing method uses the crypt(3) function to generate a salted MD5 hash. Without digging into the source code, I can’t tell how many rounds of MD5 it is, but it’s sort of irrelevant: MD5 is broken and unsuited for password hashing. Still, it’s better than plaintext; it basically only keeps honest people honest.
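
If you’re curious what these hashes look like, you can generate one yourself; this openssl invocation produces a crypt(3)-style salted MD5 hash (the salt and password here are arbitrary examples):

$ openssl passwd -1 -salt xyz 'secret'
$1$xyz$...   (actual hash output elided)

Moving on…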

To set up TLS/SSL you have a decision to make: self-signed or legitimate certificate. I prefer legitimate because these days you can get SSL certificates very cheaply (or even free). It’s a bit easier to set up self-signed though:

# openssl genrsa -out ./jabber.key 2048
# openssl req -new -key ./jabber.key -x509 -sha1 -out ./jabber.crt
# cat jabber.crt jabber.key >> combined.crt
# rm jabber.crt jabber.key

jabberd2 requires that the key (and any intermediate certificates) be present in the PEM file. I don’t like this approach; the key should not be readable by anyone but root. Unfortunately, the jabberd2 processes aren’t clever enough to read in the key as root and then drop privileges. Patches are welcome, I’m sure.

If you want to go the legitimate certificate route, create a key and certificate signing request:

# openssl genrsa -out ./jabber.key 2048
# openssl req -new -key ./jabber.key -sha1 -out ./jabber.csr

Send the CSR to your certificate authority. They should send you back a certificate and one or more intermediate / root certificates. Combine them, along with your key, into a single PEM file:

# cat certificate.crt intermediate.crt root.crt key.key > combined.crt
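
Before pointing jabberd2 at it, it’s worth sanity-checking the first certificate in the bundle:

$ openssl x509 -in combined.crt -noout -subject -dates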

To install the SSL certificate (either self-signed or legitimate), you need to edit the <id> tag in c2s.xml:

<local>
  <id register-enable='mu' pemfile='/path/to/combined.crt' require-starttls='mu' password-change='mu'>domain.com</id>
  ...
</local>

My server requires all client connections to be encrypted. If you want to support TLS, but don’t care whether or not the connections are encrypted, just remove the require-starttls parameter.

Next up is setting the IP address to listen on. This is a very quick edit to c2s.xml and s2s.xml:

<local>
  ...
  <ip>0.0.0.0</ip>
  ...
</local>

Adding an administrator is similarly easy. Just edit the aci section of sm.xml:

<aci>
  ...
  <acl type='all'>
    <jid>admin@domain.com</jid>
  </acl>
  ...
</aci>

Securing the inter-service communication is fairly easy, but a little cumbersome. You need to edit all the service configuration files (c2s.xml, s2s.xml, and sm.xml) as well as the router configuration (router.xml) and the router users file (router-users.xml).

For each service, create a user entry in router-users.xml:

<users>
  <user>
    <name>c2s</name>
    <secret>random_password</secret>
  </user>
  <user>
    <name>s2s</name>
    <secret>random_password</secret>
  </user>
  <user>
    <name>sm</name>
    <secret>random_password</secret>
  </user>
</users>

Then edit router.xml to give access to each of these users:

<aci>
  <acl type='all'>
    <user>c2s</user>
    <user>s2s</user>
    <user>sm</user>
  </acl>
  ...
</aci>

While you’re there, go ahead and enable SSL:

<local>
  ...
  <pemfile>/path/to/combined.crt</pemfile>
  ...
</local>

You could probably be more granular in the ACLs, but I don’t know enough about what permissions are needed for each process. Next, you have to edit each service’s configuration file and change the user & password settings — make sure you match the passwords correctly to what you defined in router-users.xml:

<router>
  ...
  <user>c2s OR s2s OR sm</user>
  <pass>corresponding password</pass>
  <pemfile>/path/to/combined.crt</pemfile>
  ...
</router>

Now all your services should be using a unique password to communicate. In addition, the traffic will be encrypted.
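
Restart the stack so every component picks up its new credentials:

# service jabberd restart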

All that remains is to set up multiple domains. Searching around the net I couldn’t find very much documentation, but thankfully it’s very easy. In previous versions of jabberd2, you needed to run an sm process for each separate domain. That no longer seems to be the case. You just need to edit c2s.xml, sm.xml, and configure appropriate SRV records. In c2s.xml, just add another ID entry for the second domain:

<local>
  ...
  <id register-enable='mu' pemfile='/path/to/combined.crt' require-starttls='mu' password-change='mu'>domain1.com</id>
  <id register-enable='mu' pemfile='/path/to/combined.crt' require-starttls='mu' password-change='mu'>domain2.com</id>
  ...
</local>

Pretty much the same thing in sm.xml:

<local>
  <id>domain1.com</id>
  <id>domain2.com</id>
</local>

Then add the SRV records to your DNS:

_xmpp-client._tcp.domain1.com. IN    SRV 5    5 5222 domain1.com.
_xmpp-server._tcp.domain1.com. IN    SRV 5    5 5269 domain1.com.

Do the same thing for domain2.com, leaving domain1.com as the destination at the end:

_xmpp-client._tcp.domain2.com. IN    SRV 5    5 5222 domain1.com.
_xmpp-server._tcp.domain2.com. IN    SRV 5    5 5269 domain1.com.
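
Once DNS propagates, a quick dig query will confirm the records are resolving:

$ dig +short SRV _xmpp-client._tcp.domain2.com
5 5 5222 domain1.com.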

This configuration will let you use Jabber IDs for either domain1.com or domain2.com. Just make sure in your client’s configuration you put in the correct value for the Connect Server — in this example, that would be domain1.com.

Once you’ve created your accounts using a Jabber client, you might consider going back to c2s.xml and removing the register-enable parameter from the id tag. Otherwise any Internet user can create accounts on your server.
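
With registration disabled, the id entry would look like this (everything else stays the same):

<id pemfile='/path/to/combined.crt' require-starttls='mu' password-change='mu'>domain1.com</id>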

I recently set up Marco Arment’s Second Crack blogging platform on my RHEL 6 server. I had been using WordPress for a while, but got frustrated with its dynamic rendering strategy. I want a blogging engine that scales easily and plays nicely with nginx, not something that has to parse code and query databases just to get the content. Here’s how I did it.

Firstly, per Marco’s advice, I installed both the command-line interface to Dropbox and the inotify-tools package from the EPEL repo. This allows me to write and edit blog posts from anywhere I have access to Dropbox and have my changes applied instantly. It’s really amazing to be able to edit blog posts in plaintext using Markdown syntax from my iPad mini. I cannot overstate this.

I’m very minimalist when it comes to installing packages on my server. I generally start with the absolute Minimal distribution and only install packages as needed. There is no reason for a webserver to have Xorg, after all. This methodology presented a problem with Dropbox at first. The Dropbox for Linux page only lists packages for Ubuntu and Fedora (with an option to build from source). These packages, however, are for the desktop interface of Dropbox, which is not needed. Instead, I found a link to a simple dropbox.py utility, which keeps everything in sync and only requires Python 2.6. Best of all, the installer and corresponding daemon run as an unprivileged user. Simply download the script and run the install command:

$ python dropbox.py start -i

After it finishes the download and install, it should prompt you to visit a URL to link the system to your Dropbox account. The installer will download the necessary libraries and runtime data into ~/.dropbox-dist/ and put the configuration stuff into ~/.dropbox/. By default it will create ~/Dropbox/ to hold the items you want synced. The only caveat to the Dropbox install is that the process will need to be relaunched after each reboot: python ~/dropbox.py start
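
One simple way to handle the relaunch (assuming dropbox.py is saved in your home directory) is an @reboot entry in the unprivileged user’s crontab:

$ crontab -e
# then add this line:
@reboot python $HOME/dropbox.py start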

Installing Second Crack is basically as straightforward as Marco lays it out in the included README.md. Just pull down the repository, make the changes to the configuration file (blog name, URL, etc.), configure the crontab, and start designing your template (that’s where I spent the vast majority of my time).

By default, Second Crack will install a simple .htaccess file to perform the slug line URL redirects. Unfortunately, .htaccess files don’t translate directly to nginx, but in this case the needed configuration is incredibly easy:

index index.html;
location ~ ^/blog/. {
    default_type text/html;
    try_files $uri $uri.html;
}
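
To spot-check the rewrite, request a post URL without its .html extension (the hostname and slug here are made up):

$ curl -sI http://blog.example.com/blog/my-first-post | head -1
HTTP/1.1 200 OK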

To stop Second Crack from automatically reinstalling the .htaccess file, simply comment out line 438 of {SECOND_CRACK}/engine/Updater.php (though it really doesn’t matter):

//if (! file_exists(self::$dest_path . '/.htaccess')) copy(dirname(__FILE__) . '/default.htaccess', self::$dest_path . '/.htaccess'); 

If you’ve more or less followed along, you should be pretty much good to go. For going mobile, I literally searched the App Store for “text editor dropbox” and bought the first one: PlainText by Hog Bay Software. There’s a lot I absolutely love about this app.

Happy blogging!

Setup

The process is fairly straightforward, but there are some requirements:

  1. A TFTP server with some specific files. These should all be in the root of the TFTP directory.
    1. The latest version of the SheevaPlug U-Boot binary.
    2. The latest version of the Debian installer image (uImage and uInitrd).
    3. The latest version of the Linux kernel (optimized for SheevaPlug; uImage and Modules).
  2. A 2+ GB USB thumb drive.
  3. Terminal emulation software (GNU screen, PuTTY, minicom, HyperTerm) [Note: Mac OS X’s version of screen seems to have issues with the Debian installer, so I used PuTTY in a VM.]

There’s a lot of interrupting the initial boot process, which requires fairly fast attachment of your terminal. If you’re using a VM environment, make sure you tell it to remember your “Attach to Host or VM?” preference, otherwise you’ll miss the interrupt prompt.
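
As for the TFTP server requirement, if you don’t already have one handy, a minimal setup on a Debian/Ubuntu box might look like this (tftpd-hpa and its default /srv/tftp root are assumptions; adjust for your distribution):

# apt-get install tftpd-hpa
# cp u-boot.kwb uImage uInitrd sheeva-3.4.7-uImage /srv/tftp/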

Upgrade U-Boot

  1. Reset the SheevaPlug
  2. Interrupt the boot process

    1. “Hit any key to stop autoboot:”
  3. Document the MAC address

    Marvell>> print ethaddr
    ethaddr=FF:FF:FF:FF:FF:FF
    
  4. Boot using TFTP

    Marvell>> setenv ipaddr x.x.x.x
    Marvell>> setenv serverip y.y.y.y
    Marvell>> tftpboot 0x0800000 u-boot.kwb
    Marvell>> nand erase 0x0 0x60000
    Marvell>> nand write 0x0800000 0x0 0x60000
    Marvell>> reset
    
  5. Fix the MAC address by interrupting the boot process (the new U-Boot loses the setting).

    Marvell>> setenv ethaddr FF:FF:FF:FF:FF:FF
    Marvell>> saveenv
    Marvell>> reset
    

Burn the new kernel

Marvell>> setenv ipaddr x.x.x.x
Marvell>> setenv serverip y.y.y.y
Marvell>> tftpboot 0x2000000 sheeva-3.4.7-uImage
Marvell>> iminfo
Marvell>> nand erase 0x100000 0x400000
Marvell>> nand write 0x2000000 0x100000 0x400000
Marvell>> setenv mainlineLinux yes
Marvell>> setenv arcNumber 2097
Marvell>> saveenv

Install Debian

For this section, I recommend using PuTTY on a Windows VM, or at least something other than GNU Screen on Mac OS X, which doesn’t play nice with the Debian installer’s character set for some reason. You can try SyncTERM on Mac OS X, but it still isn’t quite right and it becomes very easy to check the wrong boxes.

Marvell>> setenv ipaddr x.x.x.x
Marvell>> setenv serverip y.y.y.y
Marvell>> tftpboot 0x0400000 uImage
Marvell>> tftpboot 0x0800000 uInitrd
Marvell>> setenv bootargs console=ttyS0,115200 base-installer/initramfs-tools/driver-policy=most
Marvell>> bootm 0x0400000 0x0800000

Once in the installer, accept the defaults for all the questions, set a root password, etc. When you get to the disk partitioning, it should not detect any disks and will ask to configure iSCSI.

  1. Plug in a USB pen drive
  2. Use [TAB] to select Go Back
  3. Select “Disk Partitioning” again
  4. The USB drive is now detected
  5. Use default partitioning (everything in /)
  6. Be careful selecting packages. I select only the SSH server and NOT Standard System Utilities; otherwise you’ll fill up the SheevaPlug’s internal flash
  7. Once finished, interrupt boot process and boot from USB:

    Marvell>> setenv bootargs_console console=ttyS0,115200
    Marvell>> setenv bootcmd_usb 'usb start; ext2load usb 0:1 0x0800000 /uInitrd; ext2load usb 0:1 0x400000 /uImage'
    Marvell>> setenv bootcmd 'setenv bootargs $(bootargs_console); run bootcmd_usb; bootm 0x400000 0x0800000'
    Marvell>> boot
    
  8. Set up UBIFS (so much faster than JFFS2; 2+ minute boot down to 20 seconds)

    # apt-get install mtd-utils
    # ubiformat /dev/mtd2 -s 512
    # ubiattach /dev/ubi_ctrl -m 2
    # ubimkvol /dev/ubi0 -N rootfs -m
    # mount -t ubifs ubi0:rootfs /mnt
    
  9. Clone the USB root to the internal flash (UBIFS)

    # mkdir /tmp/rootfs
    # mount -o bind / /tmp/rootfs/
    # cd /tmp/rootfs
    # sync
    # cp -a . /mnt/
    
  10. Fix the /mnt/etc/fstab file

    # cat << END > /mnt/etc/fstab
    /dev/root / ubifs defaults,noatime,rw 0 0
    tmpfs /var/run tmpfs size=1M,rw,nosuid,mode=0755 0 0
    tmpfs /var/lock tmpfs size=1M,rw,noexec,nosuid,nodev,mode=1777 0 0
    tmpfs /tmp tmpfs defaults,nosuid,nodev 0 0
    END
    
  11. Reboot — interrupt boot cycle, unplug USB

  12. Reconfigure the boot command

    Marvell>> setenv bootargs 'console=ttyS0,115200 ubi.mtd=2 root=ubi0:rootfs rootfstype=ubifs'
    Marvell>> saveenv
    Marvell>> reset
    
  13. Install some packages

    # apt-get install sudo
    

Resources: