jamie

Creating an S3 image bucket and automating it.

One thing I’ve always wanted to do was create my own image host where I can dump screenshots and random junk. Tools like ShareX (Windows) and Monosnap (Mac) are great and have the functionality built in to both take the screenshot and upload it wherever you want. ShareX especially comes with a boatload of other features and it’s (in my opinion) one of the best tools currently available for Windows.

Getting started

So, to get started, go and grab a domain name or choose a subdomain you want to use for this. I recommend pairing this with Cloudflare, as you’ll need CNAME flattening if you’re using the bare domain rather than a subdomain.

We’re going to be using Amazon S3 for this; it’s super easy to set up and we’ll have everything sorted within 30 minutes, easy. We’ll also be using the eu-west-1 region because Ireland is an option, hell yeah.

Creating your S3 bucket

So go ahead and log into Amazon, or create an account if you haven’t already. Once you’re in, click on Services and select S3. You’ll want to name your bucket the same as your domain (www.domain.com or domain.com, depending on your preference), then choose your preferred bucket location. You don’t need to change any other options and can click Next, Next to finish up.

Creating the necessary S3 permissions

With that created, you now need an additional user, so again browse to Services and select ‘IAM’, then click ‘Users’ and ‘Add user’. Tick ‘Programmatic access’ and *not* console access. Click Next, choose ‘Attach existing policies directly’, then search for ‘s3’ and tick the ‘AmazonS3FullAccess’ policy. Then create the user.

Next, generate access keys to plug into ShareX/Monosnap so they can upload images. Go to Services, then IAM, click on the user you created, then open ‘Security credentials’. In here you can create two access keys: one for each program if you use both Windows and macOS, or just one otherwise.

Linking it to your snap tool

For monosnap, this is super easy.

Just replace the keys, set your region, select your bucket and your path. I suggest putting everything in a subfolder just to keep it tidy. The base URL ensures your domain name is used instead of the long Amazon S3 URL. If you’re using a subfolder, you should also create it within the bucket in the Amazon console.

ShareX is pretty similar;

Both programs also support randomizing the image name, which is pretty cool too.
In ShareX it’s under Hotkey Settings > Action > File Naming.
In Monosnap it’s under Preferences > Advanced.

Enabling static website hosting at S3

To clean up some loose ends, click into the bucket again and click ‘Properties’, then enable ‘Static website hosting’.

Whip together a quick index.html and error.html and drop them both into the bucket, making sure both are public and readable.
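If you’d rather script this part, the same can be done with the AWS CLI. This is just a sketch: domain.com stands in for your bucket name, and it assumes the CLI is already configured with the IAM keys from earlier.

```shell
# Enable static website hosting on the bucket (domain.com is a placeholder)
aws s3 website s3://domain.com/ --index-document index.html --error-document error.html

# Upload both pages and make them publicly readable
aws s3 cp index.html s3://domain.com/index.html --acl public-read
aws s3 cp error.html s3://domain.com/error.html --acl public-read
```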

DNS

Next, go back to Cloudflare and let’s CNAME the domain. Create a CNAME for www and point it to

bucketname.s3-website-eu-west-1.amazonaws.com

Note: if you’re using a bucket from a different region this URL will differ, so check this page again to get an idea of what you’ll need.

Do the same for domain.com, then hop into the ‘Crypto’ tab on Cloudflare and set SSL to Flexible. Lastly, hop into Page Rules and make a new rule: for the URL enter http://*yourdomain.com* and choose the ‘Always Use HTTPS’ setting, then save and deploy.
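If you prefer the API over the dashboard, the www record can be created with a curl call along these lines. A sketch, not gospel: ZONE_ID and CF_API_TOKEN are placeholders you’d fill in from your Cloudflare account.

```shell
# Create a proxied CNAME for www pointing at the S3 website endpoint
curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{
    "type": "CNAME",
    "name": "www",
    "content": "bucketname.s3-website-eu-west-1.amazonaws.com",
    "proxied": true
  }'
```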

Then test it out by uploading a screenshot, enjoy!

Moving your Plex server to a larger partition.

As your Plex library grows, your metadata and caches will start ballooning. For those who install Plex via scripts like Quickbox, this presents a very real problem, as more often than not there’s only 20-50GB or so reserved for root, which won’t be enough.

Luckily, moving plex is simple.

First off, kill Plex:

/etc/init.d/plexmediaserver stop or killall plex if you’re impatient.

Next, copy your Plex folder to its new home,

cp -rf /var/lib/plexmediaserver/Library/Application\ Support /path/to/newdir/

then rename the original so it’s out of the way (we’ll keep it as a backup for now)

mv /var/lib/plexmediaserver/Library/Application\ Support /var/lib/plexmediaserver/Library/AS

Then we symlink the old path to its new home,

ln -s /path/to/newdir/Application\ Support /var/lib/plexmediaserver/Library/Application\ Support

and chown it. Target the real directory rather than the old symlinked path, since chown -R won’t follow the symlink:

chown -R plex: /path/to/newdir/Application\ Support

Start up Plex with /etc/init.d/plexmediaserver start and test it out. If everything’s working as it should be, you can go ahead and remove the old backup: rm -rf /var/lib/plexmediaserver/Library/AS.
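The steps above can be rolled into one small shell function if you like. This is just a sketch of the same sequence (the function name is mine); the paths in the example call are the stock ones from above, so adjust to taste.

```shell
#!/bin/sh
# move_plex_library <library-dir> <new-home>
# Copies "Application Support" out of the library dir into its new home,
# keeps the original as <library-dir>/AS for rollback, then symlinks the
# old path to the new location.
move_plex_library() {
    lib="$1"
    dest="$2"
    cp -rf "$lib/Application Support" "$dest/"
    mv "$lib/Application Support" "$lib/AS"
    ln -s "$dest/Application Support" "$lib/Application Support"
    # chown the real directory, not the symlink -- chown -R won't follow it
    chown -R plex: "$dest/Application Support" 2>/dev/null || true
}

# Stop Plex first, then for example:
# move_plex_library /var/lib/plexmediaserver/Library /path/to/newdir
```

Once Plex is confirmed working against the symlink, remove the AS backup as above.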

Easy peasy, 5 minutes and you’re golden – no more 100% disks.

Cloudflare & WordPress Logins

Cloudflare can do some pretty nifty stuff to add extra security to any sensitive page, not just WordPress. For this example though, we’ll be using WordPress.

On all my domains, I’ve set up a new page rule to give added security to the dashboard login pages. What this means is that anyone trying to use wp-login.php, instead of being served the page directly, will go through a Cloudflare browser check first. Essentially, it just checks that the traffic is legitimate before allowing it any further.

Setting it up is pretty simple: log in to Cloudflare, select the domain you want and click on the ‘Page Rules’ tab. From there, create a new rule and set it up like so,

That’s pretty much all there is to it, if you’re using Cloudflare already there’s no reason not to be leveraging this extra protection.
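For the curious, the equivalent rule can be sketched against the Cloudflare v4 API. The zone ID, token and domain below are placeholders, and I’m assuming the browser check is applied via the ‘I’m Under Attack’ security level, so double-check the action against what your dashboard shows.

```shell
# Page rule: send wp-login.php visitors through the interstitial browser check
curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/pagerules" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{
    "targets": [{
      "target": "url",
      "constraint": {"operator": "matches", "value": "*example.com/wp-login.php*"}
    }],
    "actions": [{"id": "security_level", "value": "under_attack"}],
    "status": "active"
  }'
```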

Making a VPN the lazy way.

This was a lot easier than I remember it. I set one up years ago on an old OVH VPS and it was a nightmare, probably because I was still pretty green.

I set one up this afternoon on Scaleway. I chose Scaleway because they offer a package that works well with VPNs: higher guaranteed bandwidth than OVH (200 vs 100Mbit/s). Otherwise system specs don’t really matter too much. I skipped their ARM offerings and went instead for the cheapo 2.99-a-month, dual-core, 2GB RAM, 50GB SSD box.

After spinning up your new node, change the kernel. You can do this in their control panel instead of the shell: just click on Advanced and change the bootscript to x86_64 4.10.8 std #1 (stable), then reboot.

Then grab this bitchin’ script right here and install it. Don’t bother editing the user beforehand; it’s just as easy to modify afterwards if you want to. You can find it in /etc/ppp/chap-secrets, and the PSK is in /etc/ipsec.secrets.
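For reference, entries in chap-secrets follow pppd’s four-column format. The username and password below are placeholders, and the server column should match whatever name your xl2tpd setup uses (commonly l2tpd):

```
# /etc/ppp/chap-secrets
# client      server   secret           IP addresses
"vpnuser"     l2tpd    "vpnpassword"    *
```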

Just remember to run service ipsec restart and service xl2tpd restart afterwards.

The best part of all this was that I could connect natively from OS X without installing a client. Scaleway does offer an image with OpenVPN pre-installed if that’s your thing. The second best part is that IPsec/L2TP is theoretically secure, the best kind of secure. Stay away from PPTP.

Getting an A on Mozilla Observatory

So I’ve done this with a few sites so far and figured it might be worth a write-up. It’s actually a lot less daunting than it seems. Here’s where you can test it.

jamie.ie, blog.jamie.ie and hoarding.me aren’t done yet as they’re hosted on WP Engine, a managed WordPress host, so I haven’t gotten around to it (yet). Feel free to test against welp.me or im.welp.me.

Assuming you’re already using nginx, it’s just a case of adding a few lines to your default config. Let’s start with the massive pain in the ass that is CSP. Kill me. I use a default that fails Mozilla Observatory. Great start, right? Don’t worry, we’ll pass everything else and get an A anyway, so who cares. Most of these entries are necessary to run any sort of social site: Facebook, Google Analytics, Twitter etc. And the MaxCDN Bootstrap.


add_header Content-Security-Policy "default-src 'self';
    script-src 'self' 'unsafe-inline' 'unsafe-eval' https://pagead2.googlesyndication.com https://connect.facebook.net/en_US/sdk.js https://platform.twitter.com https://www.google-analytics.com;
    img-src 'self' https://syndication.twitter.com https://www.facebook.com https://www.google-analytics.com data:;
    style-src 'self' 'unsafe-inline' https://maxcdn.bootstrapcdn.com https://fonts.googleapis.com;
    font-src 'self' https://fonts.gstatic.com https://maxcdn.bootstrapcdn.com data:;
    frame-ancestors 'self';
    frame-src 'self' https://staticxx.facebook.com https://www.facebook.com https://platform.twitter.com https://googleads.g.doubleclick.net;
    connect-src 'self' https://apis.google.com;
    object-src 'self' https://pagead2.googlesyndication.com";

The next three you shouldn’t be failing to begin with. CORS is the devil, so that’s skipped too. May god have mercy on your soul. Pinning keys is overkill, and if you’re using Let’s Encrypt it’s also a waste of time updating the pins every 3 months, so don’t bother.

For HTTP Strict Transport Security;


add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";

For Redirection;


server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name your-domain-name.tld;
    return 301 https://$host$request_uri;
}

server {
    server_name your-domain-name.tld;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

You can omit both server_name directives if you’re only serving one site.

For Referrer Policy;


add_header Referrer-Policy "no-referrer";

For X-Content-Type-Options;


add_header X-Content-Type-Options nosniff;


For X-Frame-Options, it’s covered by the CSP we added in the first part (frame-ancestors ‘self’).

For X-XSS-Protection;


add_header X-XSS-Protection "1; mode=block";

Harden your SSL settings with this;


ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
ssl_session_tickets off;
ssl_ecdh_curve secp384r1;

Grab a beer and enjoy your A rating.