
Replacing monosnap + S3 bucket with a self-hosted option.

I put up a post here before on how to automate this (on Macs) with monosnap and an S3 bucket. However, monosnap have gone and changed their pricing structure, so now it's around $100 if you want to use your own filehost, which sucks. It's free if you use monosnap hosting and create an account, but that sucks too, so I've been working on an alternative for the last few weeks.

I won't go into detail on the software I'm using (a modified version of this) because that's not really the point of this post. Instead, it's more tailored towards setting yourself up to use https://up.loaded.ie. As a note, you can manually upload pretty much any file you'd like - I've restricted .php and .sh files for obvious reasons, so feel free to use anything else.

So monosnap is out; what I've been using recently is katana. It does screenshots pretty well, but no video yet, which is a bit of a bummer. In short though, you can use any tool you'd like so long as it can upload via an upload.php file.
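
If you want to test the endpoint from the terminal first (or wire up a different tool), a plain multipart POST with curl is enough - note the upload.php path and the file field name here are assumptions based on my setup, so adjust them to match whatever your host expects;

curl -F "file=@screenshot.png" https://up.loaded.ie/upload.php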

Assuming you too want to use katana, all you have to do is download and install it. Then open preferences and set it like so:

General.

Services -> Upload Service.

You can leave the url-shortener as default (or set up your own). Same for the shortcuts - I set mine to cmd+shift+f4.

Easy peasy.

Using goaccess to parse & display traffic for multiple sites

https://stats.honk.ie/

Long story short, I host a lot of websites at my lab. Everything runs through a central nginx reverse proxy. For a while I've thought about using stuff like loggly to parse these logs and throw up some neat data on where the traffic is going, when it's busy, etc. I stumbled upon goaccess last week and it revived my interest in the idea.

The good news is it's super easy to set up and you can 100% automate it. I set mine up on the reverse proxy VM so I didn't need to faff about with rsyncing master logs around - it takes 15-17 seconds to parse all my logs (around 750MB of logfiles), run them through the maxmind geo-ip database and output the html, so it's pretty efficient. I'm using Ubuntu 16.04, for reference.

getting started

Installing goaccess;

echo "deb http://deb.goaccess.io/ $(lsb_release -cs) main" | sudo tee -a /etc/apt/sources.list.d/goaccess.list
wget -O - https://deb.goaccess.io/gnugpg.key | sudo apt-key add -
sudo apt-get update
sudo apt-get install goaccess

There's a tonne of changes you can make to the /etc/goaccess.conf config file. Here's what I did - just added these three lines to the top of the config;

time-format %T
date-format %d/%b/%Y
log-format %h - %^ [%d:%t %^] "%r" %s %b "%R" "%u"

And edited these lines;

html-report-title honk.ie stats - updated every 15 minutes
exclude-ip 10.0.0.0/24
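
Before automating anything, it's worth a quick sanity check that goaccess actually parses your logs with that format - something like this (I'm assuming the stock access.log path here; open the output in a browser and make sure the numbers look sane);

goaccess -f /var/log/nginx/access.log -o /tmp/test.html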

With that done, let's create our folder to set up the scripts and automate this. I used /opt/goaccess/ and will use that for this example - feel free to choose whatever you want.

mkdir /opt/goaccess && cd /opt/goaccess && wget https://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
tar -xzf GeoLite2-City.tar.gz && mv GeoLite2-City_*/GeoLite2-City.mmdb . && rm -rf GeoLite2-City_* && rm -rf GeoLite2-City.tar.gz

This makes the folder, downloads the maxmind geoip database, extracts it, moves the database into the working directory and cleans up the leftovers.

Let's grab a master version of the current logs too. zcat -f handles both the plain and the rotated .gz logs, so one pass over the glob covers everything;

zcat -f /var/log/nginx/*access.log* > /opt/goaccess/master.log

scripts

nano build.sh

#!/bin/bash

SERVER_LOG='/var/log/nginx/*.access.log'
MASTER_LOG='/opt/goaccess/master.log'
HTML_OUT='/var/www/stats.honk.ie/index.html'
BLACKLIST='/opt/goaccess/spammers.txt'

# Append the current logs to the master log, then dedupe it in place.
# (-i inplace needs gawk 4.1+, so install gawk if you're on stock mawk.)
cat $SERVER_LOG >> $MASTER_LOG
awk -i inplace '!seen[$0]++' $MASTER_LOG

# Parse the master log, skipping referrers on the spam blacklist, and write out the html report.
goaccess -f $MASTER_LOG $(printf -- "--ignore-referer=%s " $(<$BLACKLIST)) --geoip-database /opt/goaccess/GeoLite2-City.mmdb --agent-list --no-progress -o $HTML_OUT

NOTE: If your logs live somewhere else, or you're using apache instead of nginx, update these variables. Also make sure HTML_OUT points to where you want the index.html for your stats site to be placed. Before running this, make that folder and chown it so nginx can read it;

mkdir /var/www/yoursite.tld/ && touch /var/www/yoursite.tld/index.html  && chown -R www-data. /var/www/yoursite.tld

nano spam.sh

#!/bin/bash

cd /opt/goaccess/

# --timestamping only re-downloads spammers.txt when the upstream copy is newer.
wget --timestamping --output-file=wget-cron.log https://raw.githubusercontent.com/piwik/referrer-spam-blacklist/master/spammers.txt
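
Run this once by hand before the first build.sh run, otherwise spammers.txt won't exist yet and build.sh's --ignore-referer list will come up empty;

bash /opt/goaccess/spam.sh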

crontab

Here's all I'm using for crontab entries;

*/15 * * * * /opt/goaccess/build.sh > /dev/null
0 3 * * 1 /opt/goaccess/spam.sh > /dev/null
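
One gotcha: cron needs the scripts to be executable, so if you haven't already;

chmod +x /opt/goaccess/build.sh /opt/goaccess/spam.sh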

So it updates the html every 15 minutes, and every Monday morning at 3am it updates spammers.txt. All that's left to do is set up how you want these stats displayed, point nginx towards it, and you're all set.
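
If it helps, here's a minimal sketch of the kind of nginx server block I mean - the server_name and root are from my setup, and I've left TLS out, so adapt it to yours;

server {
    listen 80;
    server_name stats.honk.ie;
    root /var/www/stats.honk.ie;
    index index.html;
}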

Pushing a Nintendo Switch through OPNsense.

The Switch has some utterly garbage NAT requirements; it's mind-bogglingly archaic. My housemate has one and couldn't play Splatoon 2 online because of it, so I got digging.

The mischievous error;

How to fix it.

Log in to OPNsense: Services > DHCPv4 > LAN

The eagle-eyed will spot that my DHCP range stops at 244. 245 is left alone and assigned only to the Switch.

Here's how to do it - scroll way down and click the + button to add a new entry.

Add in the Switch’s MAC address, give it a static IP and set the gateway & DNS servers.

Next up, go to Firewall > NAT > Outbound. Select Hybrid and hit save.

Create a new rule on the same page and set it up like in the screenshot (a rough sketch of the fields follows below).
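
In case the screenshot isn't clear, here's roughly what my rule boils down to - the source IP is the static one assigned above (yours will differ), and the static-port option is the part the Switch's NAT check actually cares about, at least in my experience;

Interface:           WAN
Source address:      the Switch's static IP (e.g. x.x.x.245/32)
Translation target:  Interface address
Static port:         checked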


This will have you all set up to join games on the Switch. If you're still having issues, enable UPnP if it's not already enabled and reboot the Switch afterwards. 👌

Recovering OPNsense access after losing 2factor access.

This has been particularly painful for me over the last fortnight, ever since my phone died and took all my 2factor logins with it. (Scratch codes are overrated, right?)

The general consensus from the OPNsense forums was that booting a live image, resetting the password, cancelling the install and then rebooting into the old image was the way to go. I put this off because 1. I was being lazy and 2. I was wary of doing this when my OPNsense image is a bit customized.

This morning I had some spare time after fixing plex, so I was looking through alternatives. The backup configs saved me once before when I mangled the drivers for the 10g switch, so I started by looking there. Sidenote: this is way too awesome not to leverage if you're already using OPNsense, check it out.

Anyway, if you check through the config and search for 'root', towards the bottom of that block you'll see an OTP string. Grab that badboy and put it into the 2factor app on your new device to set up a new code. Login working again as if nothing ever changed.
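
If you'd rather not eyeball the whole XML, something like this should fish it out - I'm assuming the seed sits in an <otp_seed> element as it does in my backup, so adjust if yours differs;

grep -o '<otp_seed>[^<]*</otp_seed>' config.xml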

I'm undecided on whether this is a security flaw or not - people should be backing up to secure locations to begin with, but it's still effectively a free scratch code. Works for now anyway!

Increasing the number of allowed processes on Zabbix hosts.

Tired of looking at something like this?

The default number of processes that Zabbix will trigger an alert at is 300. In the image above, my Zabbix server never even dips below that - always above - so it's been a (harmless) alert for about two months for me.

Turns out fixing it is pretty easy though - all we need to do is edit the template trigger to a more 'reasonable' number.

1. Login to Zabbix and head to Configuration
2. Templates
3. Template OS Linux
4. Triggers
5. Too many processes on {HOST.NAME}

I updated mine from 300 to 450, giving a little overhead for future growth while still allowing it to alert. Back down to 0 alerts and a clean bill of health, nice!
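
For reference, the stock trigger expression on my install looks roughly like the below, and the fix is just bumping the threshold - the exact syntax can differ between Zabbix versions, so treat this as a sketch;

{Template OS Linux:proc.num[].avg(5m)}>450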