From MakerSpace Leiden

The front-end server (mid 2018) runs on Linux, in a `cloud' hosted environment.

The sections below document the initial setup of the base machine, followed by the setup of each of the modules.

The final section shows the monthly and annual maintenance cycles.

Setup and rudimentary hardening

  • Get the machine in a known state and install sudo (so we can disable root; and comply with 'named accounts' only policies):
apt update
apt upgrade
apt install sudo certbot certbot-apache moreutils
  • Enable ufw and allow the usual ports for IPv4 and IPv6:
 for port in 22 25 53 80 443 1883; do
   ufw allow $port
 done
 # do this last so we're not kicked out.
 ufw enable
  • Create named accounts for each of the admins (you need to get everyone's public SSH key):
 adduser \
  --system \
  --shell /bin/bash \
  --gecos 'Dirk-Willem van Gulik' \
  --ingroup admin \
  --disabled-password \
  <username>
  • Add an SSH key for each of these users.
  • Check that you can log in, and sudo, with at least one of them.
  • Block root login and passwords in /etc/ssh/sshd_config:
 PermitRootLogin no
 PasswordAuthentication no
 ChallengeResponseAuthentication no
Note: if you did not check the sudo/login of an admin user - then you are about to lock yourself out upon reboot.
  • Edit /etc/sysctl.conf to block spoofing, ICMP broadcasts, source-routed packets, send redirects, SYN attacks, martians and ICMP redirects.
  • Prevent IP spoofing for DNS by replacing `multi on' with `nospoof on' in /etc/host.conf.
  • Secure shared memory:
 echo "tmpfs /run/shm tmpfs defaults,noexec,nosuid 0 0" >> /etc/fstab
  • Make sure we trap all crontab output (systemctl edit cron.service):
  • Reboot.
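
The sysctl bullet above is usually implemented with a fragment like the one below. A sketch only - the exact keys and values are not in the original and are assumptions; apply with `sysctl -p'.

```
# /etc/sysctl.conf hardening fragment (sketch; values are assumptions)
net.ipv4.conf.all.rp_filter = 1               # block spoofing (reverse-path filter)
net.ipv4.conf.default.rp_filter = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1      # ignore ICMP broadcasts
net.ipv4.conf.all.accept_source_route = 0     # refuse source-routed packets
net.ipv6.conf.all.accept_source_route = 0
net.ipv4.conf.all.send_redirects = 0          # do not send redirects
net.ipv4.tcp_syncookies = 1                   # SYN-flood protection
net.ipv4.conf.all.log_martians = 1            # log martians
net.ipv4.conf.all.accept_redirects = 0        # ignore ICMP redirects
net.ipv6.conf.all.accept_redirects = 0
```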

MQTT Server install

Installed with:

  sudo ufw allow 1883
  sudo apt-get update
  sudo apt-get install mosquitto

Enable on boot and start with

   sudo systemctl enable mosquitto 
   sudo systemctl start mosquitto

check with

  systemctl status mosquitto 

Then secure with

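The securing step is not spelled out here; a common approach (a sketch - the user name and file locations are assumptions, not from the original) is to disable anonymous access and point mosquitto at a password file:

```shell
# Sketch; 'nodeuser' and the paths are placeholders.
sudo mosquitto_passwd -c /etc/mosquitto/passwd nodeuser
sudo tee /etc/mosquitto/conf.d/auth.conf <<'EOM'
allow_anonymous false
password_file /etc/mosquitto/passwd
EOM
sudo systemctl restart mosquitto
```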

Setup of the basic website

We first need to set up a very basic website, in order to be able to fetch the required SSL certificates.

  • Install apache and certbot and the integration glue between the two:
sudo apt install apache2 certbot python-certbot-apache
  • Request the needed certs:
sudo certbot --apache -d -d -d
  • Ensure they get renewed; and that the admins are emailed when this goes wrong:
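
Certbot's Debian packaging normally installs a renewal cron/systemd timer of its own; a sketch of a cron entry that additionally mails the admins on failure (the address is a placeholder; `ifne' is from the moreutils package installed earlier):

```shell
# /etc/cron.d/certbot-renew (sketch; admins@example.org is a placeholder)
0 3 * * *  root  certbot renew --quiet 2>&1 | ifne mail -s "certbot renew failed" admins@example.org
```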

Setup of the MTA

Log in as one of the admins.

  • Make sure /etc/mailname is set to
  • Install postfix and basic mail stuff:
 sudo apt install postfix mailutils postgrey amavisd-new spamassassin clamav-daemon libnet-dns-perl libmail-spf-perl pyzor razor arj bzip2 cabextract cpio file gzip nomarch pax rar unrar unzip zip postsrsd
  • Edit the /etc/aliases file to redirect the mail of 'root' and update:
 sudo vi /etc/aliases
 sudo newaliases
  • Edit the certs in /etc/postfix/ and, at the end, add the hooks needed for SRS:
 # SRS rewrite (postsrsd listens on these ports)
 sender_canonical_maps = tcp:localhost:10001
 sender_canonical_classes = envelope_sender
 recipient_canonical_maps = tcp:localhost:10002
 recipient_canonical_classes = envelope_recipient,header_recipient

and check that your apt install has created an /etc/postsrsd.secret file that is readable by root only. If not, create it with a few hundred bytes of randomness.

  • Make sure there is a postfix restart hook in /etc/letsencrypt/renewal-hooks/post; as otherwise postfix won't see your fresh certs. A shell script with just something like the below should do the trick:
  #!/bin/sh
  service postfix restart
  • Give the virus/spam scanner mutual access:
 sudo adduser clamav amavis
 sudo adduser amavis clamav
  • As the amavis user, run razor-admin -create and razor-admin -register if needed.
  • Edit /etc/amavis/conf.d/15-content_filter_mode to activate the filters and restart:
 sudo /etc/init.d/amavis restart
  • The whitelist for postgrey contains a number of entries for the various door logging systems:
... to do ...
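
Until those entries are documented: the file takes one hostname (or regex) per line, for reference (hostnames below are placeholders, not from the original):

```
# /etc/postgrey/whitelist_clients.local (sketch; hostnames are placeholders)
doornode.example.org
mail.example.org
```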

Setup of Apache

   sudo apt install apache2 letsencrypt

Configure Let's Encrypt and add the flag reuse-key to cli.ini. This allows the payment system to keep trusting the server across the Let's Encrypt renewals every 90 days.
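
A minimal sketch of that setting (certbot reads /etc/letsencrypt/cli.ini):

```
# /etc/letsencrypt/cli.ini
reuse-key = True
```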

Setup of WordPress

  • Install enough of the LAMP stack to get going:
 sudo apt install mysql-server php libapache2-mod-php php-mysql wordpress-theme-twentyseventeen wordpress
  • Secure your mysql install:
 sudo mysql_secure_installation
  • Configure apache:
  cat <<'EOM' > /etc/apache2/sites-available/wordpress.conf
 <VirtualHost *:80>
     DocumentRoot /var/www/html
     ErrorLog ${APACHE_LOG_DIR}/error.log
     CustomLog ${APACHE_LOG_DIR}/access.log combined
     RewriteEngine On
     RewriteCond %{HTTPS} off
     RewriteRule (.*) https://%{SERVER_NAME}/$1 [R,L]
 </VirtualHost>
 Listen 443
 <VirtualHost *:443>
     SSLEngine on
     SSLCertificateFile  /etc/letsencrypt/live/
     SSLCertificateKeyFile /etc/letsencrypt/live/
     SSLProtocol all -SSLv2 -SSLv3
     SSLHonorCipherOrder On
     DocumentRoot /usr/share/wordpress
     ErrorLog ${APACHE_LOG_DIR}/error.log
     CustomLog ${APACHE_LOG_DIR}/access.log combined
     <Directory /usr/share/wordpress>
           Options FollowSymLinks
           AllowOverride Limit Options FileInfo
           DirectoryIndex index.php
           Order allow,deny
           Allow from all
     </Directory>
     <Directory /usr/share/wordpress/wp-content>
           Options FollowSymLinks
           Order allow,deny
           Allow from all
     </Directory>
 </VirtualHost>
 EOM
  • Create a database and a config file (changing yourpasswordhere123):
 cat <<EOM |  sudo mysql --defaults-extra-file=/etc/mysql/debian.cnf
 CREATE DATABASE wordpress;
 GRANT ALL PRIVILEGES
 ON wordpress.*
 TO wordpress@localhost
 IDENTIFIED BY 'yourpasswordhere123';
 EOM
 cat <<'EOM' > /etc/wordpress/config-localhost.php
 <?php
 define('DB_NAME', 'wordpress');
 define('DB_USER', 'wordpress');
 define('DB_PASSWORD', 'yourpasswordhere123');
 define('DB_HOST', 'localhost');
 define('WP_CONTENT_DIR', '/usr/share/wordpress/wp-content');
 define('FS_METHOD', 'direct');
 EOM
  • Secure this file so that only the webserver can see this password.
    sudo chown root:www-data /etc/wordpress/config-localhost.php
    sudo chmod o-rwx,g-wx /etc/wordpress/config-localhost.php
  • Enable, kill default and restart:
  sudo a2ensite wordpress
  sudo a2dissite 000-default 
  sudo systemctl reload apache2
  • Check that it all works by visiting
  • If you want to be able to update in place - be sure to have the wordpress content directory owned by `www-data'.
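
A sketch of that ownership change, assuming the packaged path used above:

```shell
sudo chown -R www-data:www-data /usr/share/wordpress/wp-content
```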

Updating WordPress

See WordpressUpdate

Setup of Media Wiki

  • Install the base packages
 apt install mediawiki imagemagick php-apcu php-intl
  • Create the database:
cat <<EOM |  sudo mysql --defaults-extra-file=/etc/mysql/debian.cnf
CREATE DATABASE mediawiki;
GRANT ALL PRIVILEGES
ON mediawiki.*
TO mediawiki@localhost
IDENTIFIED BY 'yourpasswordhere12443';
EOM
  • Restart apache.
  • Go to the wiki - and use the config values from the DB setup above.
  • Follow the instructions and copy the generated LocalSettings.php to the specified location.
  • Add the various users.

Making a backup and importing it

  • Backup script:
 set -e
 set -x
 D=`date +%Y%m%d%H%M%S`
 mkdir media-wiki-backup.$D
 cd media-wiki-backup.$D
 mysqldump --user=wikiuser --password="XXXX" wikidb > file.sql
 mysqldump --user=wikiuser --password="XXXX" wikidb --xml > file.xml
 cp -r /usr/local/www/mediawiki/images .
 cp /usr/local/www/mediawiki/LocalSettings.php .
 cd ..
 tar zcf media-wiki-backup-$D.tgz media-wiki-backup.$D
 rm -rf  media-wiki-backup.$D
  • Dump the full wiki:
    jexec mls  ... script ..
  • Copy the file across.
  • Import it on the other machine:
  #!/bin/sh
  set -ex
  if ! test -f latest.tgz; then
   	echo no last dump.
   	exit 1
  fi
  (
   mkdir tmp.$$
   cd tmp.$$
   tar zxf ../latest.tgz
   cd *
   rm -rf /var/lib/mediawiki/cache /var/lib/mediawiki/images
   mkdir import /var/lib/mediawiki/cache /var/lib/mediawiki/images
   cp -r images/? /var/lib/mediawiki/images
   chown -R www-data:www-data /var/lib/mediawiki/images /var/lib/mediawiki/cache
   cat <<EOM |  sudo mysql --defaults-extra-file=/etc/mysql/debian.cnf
   DROP DATABASE mediawiki;
   CREATE DATABASE mediawiki;
   GRANT ALL PRIVILEGES
   ON mediawiki.*
   TO mediawiki@localhost;
   EOM
   cat file.sql | sudo mysql --defaults-extra-file=/etc/mysql/debian.cnf mediawiki
   sudo -u www-data php /usr/share/mediawiki/maintenance/update.php
   sudo -u www-data php /usr/share/mediawiki/maintenance/cleanupImages.php
   sudo -u www-data php /usr/share/mediawiki/maintenance/rebuildall.php
   sudo -u www-data php /usr/share/mediawiki/maintenance/rebuildImages.php
  ) || exit 1
  rm -rf tmp.$$
  exit $?

Monthly Reporting of the deelnemers (members) count

The provider has set up a crontab (through their support desk, a manual action) that emails a uuencoded, gzipped list of subscribers to all list owners.

  • The user 'listreporter' has been created as an alias in /etc/aliases
 listreporter: |/etc/mailman/
  • and is activated with newaliases.

This is wired up to a wrapper script (requiring apt install moreutils) that is somewhat paranoid and parses/unpacks the email:

 #!/bin/sh
 set -e
 umask 077
 test -x /etc/mailman/ || exit 0
 TMPDIR=/tmp/listreporter.$$	# value assumed; elided in the original
 (
 	mkdir $TMPDIR
 	cd $TMPDIR
 	# limit size to prevent naughtyness
 	dd bs=1k count=32 of=in.msg
 	/usr/bin/perl /etc/mailman/
 ) 2>&1 | ifne mail -s "Fault with the list reporter."
 rm -rf "${TMPDIR}"
 exit 0

The somewhat unusual ifne is from moreutils and ensures that mail only runs if there is actual stdin text; so you do not get empty emails if all is well. This wrapper is then wired to the script sending out the actual email:

  #!/usr/bin/perl
  use strict;
  use IO::CaptureOutput qw/capture_exec/;
  open(STDIN,"in.msg") or die "Could not open msg file: $!\n";
  my $list;
  my $subject;
  while(<STDIN>) {
          last if m/^\s*$/;
          $subject = $1 if m/^Subject\s*:\s*(.*)$/;
  }
  open STDIN, '</dev/null'; # Prevent warnings deep down in IO capture.
  $list = $1 if $subject =~ m/List subscriber file:\s*(\S+)/;
  # exit 0 unless $list;
  die "Not the right subject <$subject>\n" unless $list;
  die "Not a list I manage." unless $list eq 'deelnemers';
  my ($stdout, $stderr, $success, $exit_code) = capture_exec("munpack","-q","in.msg");
  die "Unpacking failed: $!\n" unless $exit_code == 0;
  my $d;
  foreach my $f (<subscribers.$list.*.gz>) {
          next unless $f =~ m/subscribers.$list.([0-9\-]+).txt.gz/;
          $d = $1;
          system("cat '$f' | gunzip -c | dd bs=1k count=32 of=list.txt") == 0
                  or die "Could not unpack $f\n";
          open(FH,'list.txt') or die "Could not open subscriber list";
          my @list = ();
          while(<FH>) {
                  next if m/^\s*$/;
                  push @list, $_;
          }
          my $count = @list;
          open(FH,"| mail -s 'Makerspace Leiden, $count $list on $d' -aFrom:hetbestuur\@ $list\@")
                  or die "Cannot open pipe.\n";
          print FH <<"EOM";
  Hello All,
  We have currently $count makers united, listed below:
  You can edit your own information here:

  and this is also where you can switch to a digest (just one big message/day)
  or see the archive of historic messages.
  EOM
          print FH join("\n", @list);
          print FH "\n-- \nStichting Makerspace Leiden / hetbestuur\@\n\n";
          close(FH);
  }

This is then matched by sufficient permissions to allow this post to go through moderation unchecked. We should get an email from the hoster on the first of every month.

MQTT Monitoring

A simple perl script listens to the MQTT bus, and flags up any node that has not reported in for a while.

  #!/usr/bin/perl
  use strict;
  use Email::MIME;
  use Email::Sender::Simple qw(sendmail);
  my $TO = '';
  my $reporting_period = 600;
  # The XDG_CONFIG_HOME is to prevent a warning about the Euid not having a home dir.
  open(STDIN,"XDG_CONFIG_HOME=/tmp /usr/bin/mosquitto_sub -v -h -t 'log/#' |") or die $@;
  # We rotate the logs - as to be able to compress and retire them; thus preventing a full
  # disk. We could also modify this - and rely on systemd to do such for us.
  # We use a sub-directory of 'log' as to allow us to be ran as `nobody'. That is a bit
  # safer, especially as we get 'data from the raw internet'.
  open(STDOUT,"|/usr/bin/rotatelogs -lf /var/log/mqtt/doorlog.%Y.%m.%d 86400") or die $@;
  my %lastseen = ();
  my %reported = ();
  # Take a list of items; and return them with an 'and' between the one but last
  # and the last one. Just to make things look a bit nicer during reporting.
  sub nice_join {
      return "none" if !@_;
      my $last = pop;
      return $last if !@_;
      return join(', ', @_) . " and $last";
  }
  $SIG{ALRM} = sub {
  	my @lost = ();
  	map {
  		push @lost, $_
  			if time() - $lastseen{ $_ } > $reporting_period * 2;
  	} keys %lastseen;
  	if (@lost) {
  		my @nope = ();
  		map {
  			if ($reported{ $_ } ++ > 3) {
  				delete $reported{ $_ };
  				delete $lastseen{ $_ };
  				push @nope, $_;
  			}
  		} @lost;
  		my $subject = sprintf("The connection%s to doornode%s %s %s down.",
  				(@lost > 1) ? "s" : "",
  				(@lost > 1) ? "s" : "",
  				nice_join(@lost),
  				(@lost > 1) ? "are" : "is");
  		my $ps = '';
  		$ps = "PS: I won't be reporting on ".nice_join(@nope)." any more - down too long.\n"
  			if (@nope);
  		my $message = Email::MIME->create(
  		  header_str => [
  		    From    => 'Monitoring MSL <>',
  		    To      => $TO,
  		    Subject => $subject,
  		  ],
  		  attributes => {
  		    encoding => 'quoted-printable',
  		    charset  => 'ISO-8859-1',
  		  },
  		  body_str => "$subject If you are at the makerspace - please be so kind run a reset.\n".
  		  	"See the table at:\n".
  			"for the right port/cable to unseat for 5 seconds and plug back in. That should ".
  			"fix it. If you do so - let the mailing list know.\n".
  			"As the doornodes remember recent keys - you may still be able to open the door, ".
  			"especially if you entered in the last week or so. But then again - perhaps not.\n".
  			"Alternatively Aart, Hans and Dirk have physical spare keys. Dirk's key can be ".
  			"picked up during officehours at the janvossensteeg.\n".
  			"Thanks !\n\n$ps\n"
  		);
  		sendmail($message);
  	}
  	alarm $reporting_period;
  };
  alarm $reporting_period;
  while(<STDIN>) {
 	# We're on a raw bus - so do a modicum of filtering.
 	s/[^[:ascii:]]//g;
  	print;
  	next unless m/\[(\S+?)\]/;
  	$lastseen{$1} = time();
  }
  die "Lost MQTT connection.\n";

This script is started/managed by a trivial systemd unit file.

  # /etc/systemd/system/mslmon.service
  # (section headers and Install/ExecStart lines reconstructed; the script
  #  path matches the one visible in the status output)
  [Unit]
  Description=MSL Door node monitoring on MQTT

  [Service]
  ExecStart=/usr/bin/perl /home/dirkx/
  # or always, on-abort, etc
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target

Manage with the usual systemd commands such as

   $ systemctl status mslmon.service
   ● mslmon.service - MSL Door node monitoring on MQTT
     Loaded: loaded (/etc/systemd/system/mslmon.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2018-07-23 17:03:46 CEST; 8min ago
   Main PID: 31903 (
      Tasks: 4 (limit: 4583)
  CGroup: /system.slice/mslmon.service
          ├─31903 /usr/bin/perl /home/dirkx/
          ├─31914 sh -c /usr/bin/mosquitto_sub -v -h -t 'log/#'
          ├─31918 /usr/bin/rotatelogs -lf /var/log/mqtt/doorlog.%Y.%m.%d 86400
          └─31919 /usr/bin/mosquitto_sub -v -h -t log/#

And check the logs with

   $ sudo journalctl -u mslmon.service
   -- Logs begin at Sat 2018-06-23 13:19:36 CEST, end at Mon 2018-07-23 17:13:52 CEST. --
   Jul 23 17:03:46 systemd[1]: Started MSL Door node monitoring on MQTT.

Finally - there is a crontab that purges things after 90 days:

  # Retain only the last 90 days. (Note: `tail +90' here would delete the
  # newest files, as ls sorts ascending; head -n -90 removes all but the newest 90.)
  4 4 * * *       root    test -d /var/log/mqtt && ( ls /var/log/mqtt/doorlog.* | sort -n | head -n -90 | while read f; do rm "$f"; done )
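
The file selection in such a purge can be checked in isolation; with an ascending sort, head -n -N emits everything except the newest N names (made-up file names below, with N=3 standing in for the 90 of the crontab):

```shell
# Five dated log names; after the ascending sort, head -n -3 prints the two
# oldest names, i.e. the candidates for deletion when retaining the newest 3.
printf '%s\n' doorlog.2018.07.01 doorlog.2018.07.02 doorlog.2018.07.03 \
              doorlog.2018.07.04 doorlog.2018.07.05 | sort | head -n -3
# → doorlog.2018.07.01
#   doorlog.2018.07.02
```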

Obviously - this could be improved by having journald/systemd take care of this. And the script could be more clever about telling that a node came back up, or take into account the loss of the ADSL line or the MQTT broker - and distinguish these as separate failure modes.


MTA-Backups / Duplicity

Ransomware/targeted risk

This approach is not overly resistant against a targeted delete - as the sftp user can delete/modify files (as the retention is currently done from the 'source'). This is, to some extent, mitigated by snapshots -- but not sufficiently at this time.

Virus scan

There is a malware/virus scan set up across the uploads; with automatic refresh of the signature DBs:

... todo
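
Until that section is filled in: a common shape for such a setup (a sketch - the scanned path, schedule and recipient are assumptions, not from the original) is the packaged freshclam daemon for the signature refresh, plus a nightly scan that only mails when something is found:

```shell
# freshclam runs as a daemon out of the box on Debian; it keeps the signature DBs fresh.
# /etc/cron.d/scan-uploads (sketch; /srv/backups is a placeholder path)
30 5 * * *  root  clamscan -ri /srv/backups 2>&1 | ifne mail -s "malware scan hits" root
```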

DKIM setup

Packages and perms:

 sudo apt-get install opendkim opendkim-tools postfix-policyd-spf-python postfix-pcre
 sudo adduser postfix opendkim

Add to /etc/postfix/

 policyd-spf  unix  -       n       n       -       0       spawn
     user=policyd-spf argv=/usr/bin/policyd-spf

And add to its smtpd_recipient_restrictions a

 check_policy_service unix:private/policyd-spf

Add/edit /etc/opendkim.conf as needed and force the right perms:

  chmod u=rw,go=r /etc/opendkim.conf
  mkdir /etc/opendkim
  mkdir /etc/opendkim/keys
  chown -R opendkim:opendkim /etc/opendkim
  chmod go-rw /etc/opendkim/keys

and generate the keys.
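
Key generation can be sketched as below; the domain and the selector `mail' are placeholders, the original does not specify them:

```shell
sudo opendkim-genkey -b 2048 -d example.org -D /etc/opendkim/keys -s mail -v
sudo chown opendkim:opendkim /etc/opendkim/keys/mail.private
```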

Beware: the generated DNS stub is wrong - it needs to have the 'rsa-' prefix removed from the hash. Known bug from 2017, not yet fixed.

Add to the postfix configuration:

 # Milter configuration
 # OpenDKIM
 milter_default_action = accept
 # Postfix ≥ 2.6 milter_protocol = 6, Postfix ≤ 2.5 milter_protocol = 2
 milter_protocol = 6
 smtpd_milters = local:opendkim/opendkim.sock
 non_smtpd_milters = local:opendkim/opendkim.sock

Tunnel for the kWh meter

See MTA-Setup kWh.