
Using goaccess to parse & display traffic for multiple sites

https://stats.honk.ie/

Long story short, I host a lot of websites at my lab. Everything runs through a central nginx reverse proxy. For a while I've thought about using stuff like Loggly to parse these logs and throw up some neat data on where the traffic is going, when it's busy and so on. I stumbled upon goaccess last week and it revived my interest in the idea.

The good news is it's super easy to set up and you can 100% automate it. I set mine up on the reverse proxy VM so I didn't need to faff about with rsyncing master logs around. It takes 15-17 seconds to parse all my logs (around 750MB of logfiles), run them through the MaxMind GeoIP database and output the HTML, so it's pretty efficient. I'm using Ubuntu 16.04, for reference.

getting started

Installing goaccess;

echo "deb http://deb.goaccess.io/ $(lsb_release -cs) main" | sudo tee -a /etc/apt/sources.list.d/goaccess.list
wget -O - https://deb.goaccess.io/gnugpg.key | sudo apt-key add -
sudo apt-get update
sudo apt-get install goaccess

There's a tonne of changes you can make to the /etc/goaccess.conf config file. Here's what I did - just added these three lines to the top of the config;

time-format %T
date-format %d/%b/%Y
log-format %h - %^ [%d:%t %^] "%r" %s %b "%R" "%u"
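
For reference, that log-format should line up with nginx's stock 'combined' access log format, which nginx defines like this (you don't need to add this anywhere unless you've customised access_log):

log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';

Recent goaccess versions also accept log-format COMBINED as a shorthand for the same thing.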

And edited these lines;

html-report-title honk.ie stats - updated every 15 minutes
exclude-ip 10.0.0.0/24

With that done, let's create our folder to set up the scripts and automate this. I used /opt/goaccess/ and will use that for this example - feel free to choose whatever you want.

mkdir /opt/goaccess && cd /opt/goaccess && wget https://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
tar -xzf GeoLite2-City.tar.gz && mv GeoLite2-City_*/GeoLite2-City.mmdb . && rm -rf GeoLite2-City_* && rm -rf GeoLite2-City.tar.gz

This makes the folder, downloads the MaxMind GeoIP database, extracts it, moves the database into the working directory and cleans up the leftovers.

Let's grab a master version of the current logs too;

zcat -f /var/log/nginx/*access.log* > /opt/goaccess/master.log
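
zcat -f handles both the plain and the rotated .gz logs in one go. A quick sanity check that the master log actually got populated (your numbers will obviously differ):

wc -l /opt/goaccess/master.log
du -h /opt/goaccess/master.log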

scripts

nano build.sh

#!/bin/bash

SERVER_LOG='/var/log/nginx/*.access.log'
MASTER_LOG='/opt/goaccess/master.log'
HTML_OUT='/var/www/stats.honk.ie/index.html'
BLACKLIST='/opt/goaccess/spammers.txt'

# Append the current nginx logs to the master log, then strip duplicate lines in place (the inplace trick needs gawk)
cat $SERVER_LOG >> $MASTER_LOG
awk -i inplace '!seen[$0]++' $MASTER_LOG

# Build one --ignore-referer flag per entry in the blacklist and generate the HTML report
goaccess -f $MASTER_LOG $(printf -- "--ignore-referer=%s " $(<$BLACKLIST)) --geoip-database /opt/goaccess/GeoLite2-City.mmdb --agent-list --no-progress -o $HTML_OUT

NOTE: If your logs live somewhere else, or you're using Apache instead of nginx, update these variables. Also make sure HTML_OUT points to wherever you want the index.html placed for your stats site. Before running this, make that folder and chown it so nginx can read it;

mkdir /var/www/yoursite.tld/ && touch /var/www/yoursite.tld/index.html && chown -R www-data:www-data /var/www/yoursite.tld

nano spam.sh

#!/bin/bash

cd /opt/goaccess/

# Refresh the referrer spam blacklist; --timestamping only re-downloads it if the upstream file has changed
wget --timestamping --output-file=wget-cron.log https://raw.githubusercontent.com/piwik/referrer-spam-blacklist/master/spammers.txt
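
Before handing these to cron, make them executable and give them a run by hand - spam.sh first, so spammers.txt exists when build.sh goes looking for it;

chmod +x /opt/goaccess/*.sh
/opt/goaccess/spam.sh && /opt/goaccess/build.sh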

crontab

Here's all I'm using for crontab entries;

*/15 * * * * /opt/goaccess/build.sh > /dev/null
0 3 * * 1 /opt/goaccess/spam.sh > /dev/null

So it updates the HTML every 15 minutes, and every Monday morning at 3am it updates spammers.txt. All that's left to do is set up how you want these stats displayed, point nginx at it and you're all set.
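
If you need a starting point for the nginx side, a bare-bones server block along these lines does the job (the server_name and root here are placeholders - swap in your own, and add TLS however you normally would):

server {
    listen 80;
    server_name stats.yoursite.tld;

    root /var/www/stats.yoursite.tld;
    index index.html;
}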

Pushing a Nintendo Switch through OPNsense.

The Switch has some utterly garbage NAT requirements; it's mind-bogglingly archaic. My housemate has one and couldn't play Splatoon 2 online because of it, so I got digging.

The mischievous error;

How to fix it.

Log in to OPNsense and go to Services > DHCPv4 > LAN.

The eagle-eyed will spot that my DHCP range stops at 244; 245 is left alone and assigned only to the Switch.

Here's how to do it: scroll all the way down and click the + button to add a new static mapping.

Add in the Switch’s MAC address, give it a static IP and set the gateway & DNS servers.

Next up, go to Firewall > NAT > Outbound. Select Hybrid outbound NAT rule generation and hit Save.

Create a new rule on the same page for the Switch's static IP and set it up like in the screenshot - the key part is enabling Static Port on the rule, which is what the Switch is so fussy about.


This will have you all set up to join games on the Switch. If you're still having issues, enable UPnP if it's not already enabled and reboot the Switch afterwards. 👌

Recovering OPNsense access after losing 2factor access.

This has been particularly painful for me the last fortnight, ever since my last phone died and took all my 2factor logins with it (scratch codes are overrated, right?).

The general consensus from the OPNsense forums was that booting a live image, resetting the password, cancelling the install and then rebooting into the old install was the way to go. I put this off because 1. I was being lazy and 2. I was wary of doing it when my OPNsense image is a bit customized.

This morning I had some spare time after fixing Plex and was looking through alternatives. The backup configs saved me once before when I mangled the drivers for the 10G switch, so I started by looking there. Sidenote: this is way too awesome not to leverage if you're already using OPNsense, check it out.

Anyway, if you check through the config and search for 'root', towards the bottom of that block you'll see an OTP seed string. Grab that badboy and add it to the 2factor app on your new device to set up a new code. Logins work again as if nothing ever changed.
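
If you'd rather not eyeball the XML by hand, something like this pulls it straight out of a downloaded backup (assuming you saved it as config.xml, and that your version stores the seed in an otp_seed tag like mine does):

grep -o '<otp_seed>[^<]*</otp_seed>' config.xml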

I'm undecided on whether this is a security flaw or not; people should be backing up to secure locations to begin with, but it's still a free scratch code. Works for now anyway!

Increasing the number of allowed processes on Zabbix hosts.

Tired of looking at something like this?

The default number of processes that Zabbix will alert on is 300. In the image above my Zabbix server never even dips below that figure, so it's been a (harmless) alert for about two months for me.

Turns out fixing it is pretty easy though; all we need to do is edit the template to a more 'reasonable' number.

1. Login to Zabbix and head to Configuration
2. Templates
3. Template OS Linux
4. Triggers
5. Too many processes on {HOST.NAME}

I updated mine from 300 to 450, giving a little overhead for future growth while still allowing it to alert. Back down to 0 alerts and a clean bill of health, nice!
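
For reference, the stock trigger expression in the classic Template OS Linux looks like the line below (double-check your own, as the item key and syntax vary between Zabbix versions); bumping the threshold is just changing the number at the end:

{Template OS Linux:proc.num[].avg(5m)}>300

becomes

{Template OS Linux:proc.num[].avg(5m)}>450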

Upgrading RAID cards in a Dell machine.

This was terrifying for me the first time, so don't do what I did. It's pretty painless if you take your time and have backups in place beforehand. Let's dive in!

So first off, migrate any VMs you need to keep online off to another host. Make backups of anything left, because you'll wish you had if anything goes wrong. I'm going to use the H700 as an example in this post. I did this without making a backup first because I was being dumb and lazy; make backups!

Shut down the remaining VMs, put the host into maintenance mode and shut it down.

Swap out the old RAID card and replace it with the new one. Ensure the battery is hooked up and the cabling is connected correctly - SAS A to SAS A, etc.

Boot it back up, then use this guide to live-patch your host to get the drivers etc. for your new RAID card (H700). It's going to take a while to finish; reboot afterwards.

Upon booting up, hit Ctrl+R to get back into the RAID configuration. It will complain that all your drives are lost; don't worry. Hit C to load the configuration and Y to confirm.

When you're in the RAID config, hit F2 and import the foreign config. This will load the previous configuration stored on your drives.

Now, this part is important. I don't know why, but sometimes it requires the RAID to be rebuilt despite nothing changing. Two of my servers didn't need to, two did - go figure. If it does, it will automatically start an operation in the RAID config screen called background initialization ('back init'). For me this took around 35 minutes, but if you have a much larger setup it's obviously going to take longer.

When that's finished, or if you didn't need to rebuild, let's get that datastore back into ESXi!

After rebooting again and letting it come up, you'll probably get quite a fright: the datastore isn't there and all the VMs are throwing errors. To add to it, in ESXi you can't even see the datastore to add it back!

This is where vSphere vCenter comes in (the naming structure of this stuff kills me).

Log in to vCenter, navigate to your host and click 'Actions' -> 'Storage' -> 'New Datastore'.

Select your datastore type, hit Next and your LUNs should show up here.

From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column.

Note: A name present in the VMFS Label column indicates that the LUN contains a copy of an existing VMFS datastore.

Under Mount Options, you're looking for 'Keep Existing Signature', which persistently mounts the LUN with its original VMFS signature intact.

Review & finish. If you hop back to ESXi your VMs should all be populated again. On a sidenote, you may need to reconfigure global logging again if that's something you use.

To do this, go to ESXi -> 'Manage' -> 'System' -> 'Advanced Settings' and search for Syslog.global.logDir.

Set it up like so, swapping my datastore name for yours, and you're all done.
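
The value is just the datastore name in square brackets followed by a folder path, along these lines (the datastore and folder names here are examples - use your own):

[datastore1] /systemlogs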