atomicules

Push propelled program tinkerer and picture maker.

Spamhaus SBL-CSS and Linode

I discovered, thanks to someone on the NetBSD mailing list, that I’d ended up on Spamhaus’s SBL-CSS list. After an initial panic that I’d been compromised (I am pretty locked down, but some of the software I run has that potential; I guess almost everything does), I was just about ready to let rip into Spamhaus for being unappointed internet police (which is kind of true) when I decided to get in touch with them on Twitter, and they were actually really helpful.

It turned out they do actually cover my scenario in their docs (right at the bottom there is a note about Linode), but I either hadn’t noticed that bit or was looking in the wrong place.

Linode provided me with my own /64 straight away and then I “just” had to make use of that:

  • Added new DNS entries for a new domain pointing to a new IPv6 address
  • Updated my SPF DNS entry
  • Set up reverse DNS on Linode (which is a bit confusing as you can add multiple entries when you have a whole /64)
  • Edited /etc/postfix/main.cf and set myhostname to the new one
  • Just in case, also updated /etc/myname
  • I had just been using ip6mode="autohost" in /etc/rc.conf, but had to go back to using a /etc/ifconfig.wm0 file and adding static inet and inet6 entries:

      inet 178.79.141.136
      inet6 2a01:7e00:e000:035b::1 prefixlen 64 alias
    

    I switched ip6mode to just host, but haven’t rebooted yet, so that could be wrong. I just restarted the network and ifconfig shows both addresses - that’s good.
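For completeness, the mail-facing half of that list boils down to very little config. A sketch, assuming a hypothetical hostname and SPF record (only the IPv6 address is the real one from above):

# /etc/postfix/main.cf
myhostname = mail6.example.org

# SPF TXT record for the new domain (zone-file style), covering the new address:
example.org. IN TXT "v=spf1 mx ip6:2a01:7e00:e000:035b::1 ~all"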

Hopefully that means I’m now sending mail from a “good” IPv6 address, although it’s hard to know for sure as you need an IPv6 address to test against.

In an attempt to verify (by using CheckTLS’s TestSender) I realised I’d not actually got TLS set up properly for sending; I had some time ago got around to using Let’s Encrypt for receiving, but never mentioned it here. All I really needed for sending was:

-o smtpd_tls_security_level=encrypt
-o tls_preempt_cipherlist=yes

Which I’d inadvertently left commented out in /etc/postfix/master.cf (after the submission and smtpd bit). I also tweaked some other bits in /etc/postfix/main.cf based on (the now out-of-date, but better than nothing) BetterCrypto guide.
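In other words, the submission section of master.cf ends up looking something like this (a sketch; the other override flags are omitted):

# /etc/postfix/master.cf (sketch - just the relevant lines)
submission inet n       -       n       -       -       smtpd
  -o smtpd_tls_security_level=encrypt
  -o tls_preempt_cipherlist=yes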

As a result of that, I then ended up updating some TLS settings for my website (ciphers, TLS versions, DNSSEC, etc.) to get my rating up; all futile really for a personal website, but oh well.

Upgraded to Jekyll 4

Somewhat recently I got around to upgrading to Jekyll 4. I don’t like to rush into things. There’s nothing visibly different on this site, but building is a bit quicker for me. On that note, I’ve never understood the obsession of those folk with blogs of about ten posts total who spend ages finding a static site generator that can build their site in 0.000001 seconds… only to never post again after the post announcing the switch. For what it’s worth, Jekyll builds my site of around a thousand posts in about 14 secs now - even my busy schedule can afford to wait that many seconds.

As well as having to remove a lot of baseurl entries from posts and includes, upgrading was a bit of an arse as Jekyll now requires sassc-ruby even though I don’t use or care about it (perhaps I should… if I can ever be bothered to update the visual style of my site again).

As with all things that get re-written in another language to make them more portable, it actually ended up less portable as it doesn’t build by default on NetBSD. The main issue seems to be that it uses a -march=native flag:

-march=native causes the compiler to auto-detect the architecture of the build computer. At present, this feature is only supported on GNU/Linux, and not all architectures are recognized. If the auto-detect is unsuccessful the option has no effect.

Which is the very definition of non-portable. Fortunately someone had fixed it in pkgsrc, which saved me from figuring it out, but it was for Ruby 2.6 only, so I then had to update Ruby as well, which had knock-on effects of rebuilding vim, etc… all of which takes me time to sort out.

Probably someone should make a PR against ruby-sassc to fix it properly. If I ever have a spare moment…

Anyway, this is just a non-post to say I’ve upgraded to Jekyll 4, but that it makes absolutely no difference to you.

(Finally) Using https for my Fossil repos

Since I started using HAProxy there has been nothing stopping me from using TLS for my Fossil repos apart from finding the time to do it; I suppose it’s not been that long since I migrated the bulk from GitHub, even though it has been ages since I started hosting Fossil.

I just needed to update my cert to include the fossil domain, tweak my haproxy.cfg to add a new backend:

backend fossil
	mode http
	option httpchk
	# This one gives a 501
	http-check expect status 501
	server fossil 127.0.0.1:18080 check

(I am being lazy with my http checks)

and tweak the frontend section to route to this backend:

frontend https
	bind :::443 v4v6 ssl crt /usr/pkg/etc/haproxy.crt no-sslv3
	http-request redirect prefix https://%[hdr(host),regsub(^www\.,,i)] code 301 if { hdr_beg(host) -i www }
	reqadd X-Forwarded-Proto:\ https
	acl fossil-acl hdr_beg(host) -i fossil
	use_backend fossil if fossil-acl
	default_backend bozohttpd

And lastly, coming up with a crappy rc.d file so I can start fossil as a server:

#!/bin/sh
#
# $NetBSD: fossil
#

# PROVIDE: fossil
# REQUIRES: network

$_rc_subr_loaded . /etc/rc.subr

name="fossil"
rcvar=$name
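# Run the server in the background as the unprivileged fossil user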
command="/usr/bin/su -m fossil -c '/usr/pkg/bin/fossil server --port 18080 --localhost --https --repolist /home/fossil/repos &'"

load_rc_config $name

run_rc_command "$1"

Previously I was using fossil in http mode via inetd.

Could do with writing that a bit better, but it does the job for now.
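For comparison, the old inetd approach is just a single line in /etc/inetd.conf, along these lines (a sketch based on the pattern in the Fossil docs; the port and user here are illustrative):

# /etc/inetd.conf (sketch of the previous http-mode setup)
12080 stream tcp nowait fossil /usr/pkg/bin/fossil fossil http /home/fossil/repos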

Oh, actually that wasn’t “lastly”. The last thing I needed to do was update all the headers of the skins for each Fossil repo to use secureurl instead of the default baseurl:

<base href="$secureurl/$current_page" />

which was a little bit tedious (just as Fossil has login-groups, it would be nice to have a “skin-group” to set one skin across all repos).
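With more repos it would be worth scripting; Fossil can export and import the “skin” configuration area, so something like this sketch (repo paths hypothetical) ought to do it:

# Export the skin from one repo, then import it into all of them
fossil configuration export skin /tmp/skin.txt -R /home/fossil/repos/first.fossil
for repo in /home/fossil/repos/*.fossil; do
	fossil configuration import /tmp/skin.txt -R "$repo"
done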

Fossil is super nice for self-hosted stuff and personal projects; you really should try it if you haven’t. It would also be nice for group projects, but it’s hard to argue against the GitHub ecosystem.

Yet Another HAProxy and Let's Encrypt post

It’s what the world needs.

I caved in and decided to use HAProxy in front of Bozohttpd so I could:

  1. Redirect http to https
  2. Redirect www to just the domain (only just fully finished this bit)

The fiddly bit with Let’s Encrypt and HAProxy is handling the renewal of the cert. All the posts I’ve found either take the simple, but reliable, approach of stopping a web-server, running a renewal using --standalone and then re-starting the web-server, or the slightly more advanced approach of using --standalone on a non-standard port with a HAProxy rule that passes through to it as needed. But why not use --webroot instead?

Since I used --webroot originally I just have a couple of cron entries that run:

/usr/pkg/bin/certbot renew --renew-hook /usr/local/bin/reload-cert.sh

The --renew-hook only gets called if the certificate is actually renewed. Where that script is:

#! /bin/sh
# TODO: Really, this should be a /etc/rc.d reload script
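# Combine the two parts of the cert into the single PEM file HAProxy expects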
cat /usr/pkg/etc/letsencrypt/live/atomicules.co.uk/fullchain.pem /usr/pkg/etc/letsencrypt/live/atomicules.co.uk/privkey.pem > /usr/pkg/etc/haproxy.crt
export conf_file=/usr/pkg/etc/haproxy.cfg
export pid_file=/usr/pkg/etc/haproxy.pid
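# Hot reload: start a new haproxy and tell the old process (-sf) to finish gracefully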
haproxy -f $conf_file -sf $(cat $pid_file) -p $pid_file -D

Which does the necessary bit of combining the two parts of the cert (I do like that bozohttpd doesn’t require this) and then does a hot reload of the HAProxy configuration so that HAProxy is serving the new cert.

Then in my haproxy.cfg I have a http frontend with these rules:

frontend http
	bind :::80 v4v6
	acl letsencrypt path_beg /.well-known/acme-challenge/
	acl http      ssl_fc,not
	http-request redirect scheme https if http !letsencrypt
	reqadd X-Forwarded-Proto:\ http
	use_backend bozohttpd if letsencrypt

Which redirects all http requests to the https frontend unless they match the Let’s Encrypt path, in which case they pass through to the backend as http - this is important as the webroot plugin can’t work over https (which seems a bit counter-intuitive for Let’s Encrypt).

Then I have a frontend for the https stuff as follows:

frontend https
	bind :::443 v4v6 ssl crt /usr/pkg/etc/haproxy.crt no-sslv3
	http-request redirect prefix https://%[hdr(host),regsub(^www\.,,i)] code 301 if { hdr_beg(host) -i www }
	reqadd X-Forwarded-Proto:\ https
	default_backend bozohttpd

Which serves the actual Let’s Encrypt cert and also redirects the www prefix to the main domain. The backend is nothing fancy at all:

backend bozohttpd
	mode http
	# Since the check doesn't pass a domain it will 404
	option httpchk
	http-check expect status 404
	server bozo 127.0.0.1:10080 check

And this works lovely.

However, one VERY IMPORTANT thing to be aware of if you are using IPv6: if you start seeing timeouts reported during renewals or dry-runs of renewals, it is GUARANTEED to be due to your site not resolving over IPv6. There are many posts about this on the Let’s Encrypt forums and all of them start with that “definitely not being the problem” and end with “oh, actually it was”. I too ran into this issue, but hadn’t realised as my IPv6 access had broken at home (and I hadn’t realised) and had also been broken on my server for months (and I hadn’t realised) even though I appeared to have an IPv6 address.


I’d originally only generated a certificate for atomicules.co.uk, but of course if I want to redirect www to my plain domain I also actually need the certificate to be valid for www as well. It took me a little bit to figure out how to do this. I basically relied on Bozohttpd’s virtual host support and created a /var/www/vroot/www.atomicules.co.uk directory (rather than trying to do further clever redirection in HAProxy) for the sole purpose of serving up the acme-challenge stuff. Then with the above HAProxy setup this worked:

sudo certbot certonly --webroot -w /var/www/vroot/atomicules.co.uk/ -d atomicules.co.uk -w /var/www/vroot/www.atomicules.co.uk -d www.atomicules.co.uk --cert-name atomicules.co.uk

[EDIT: 2018-11-23] See: Adding a reloadcert command to /etc/rc.d/haproxy.

Redirecting From Http To Https With Bozohttpd

Following on from the previous post: I’m so slow/dumb sometimes. Of course it’s possible to redirect http requests to https with Bozohttpd; the very fact I’m running two instances of it makes this possible.

For the httpd (non-https) instance, configure it with a different virtual root directory, e.g.:

httpd_flags="-S bozohttpd -v /var/www/vredirectroot -M .html 'text/html; charset=utf-8' '' '' -M .xml 'text/xml; charset=utf-8' '' ''"

Then within that directory create the virtual host directory so you have a path like so:

/var/www/vredirectroot/atomicules.co.uk

And then within that directory just place a .bzabsredirect symlink:

sudo ln -s https://atomicules.co.uk .bzabsredirect

Restart that bozohttpd instance:

sudo /etc/rc.d/httpd restart

And “hey presto!” it works.


[EDIT: 2017-07-31] Spoke too soon. It’s too simplistic. It redirects just the path where the .bzabsredirect file is. So although http://atomicules.co.uk/ merrily redirected to https://atomicules.co.uk, an existing blog post like http://atomicules.co.uk/2017/13/32/somepost.html just 404s. Poop. Ok, I think I’ll have to go back to duplicating http and https again for the time being, otherwise I’ll break a load of links - well, a handful. One thing .bzabsredirect will work for is redirecting www.atomicules.co.uk on its own, which I’d just left broken for now. I might take a look at HAProxy as I’m not moving off Bozohttpd.

Now Serving Https As Well

Since it’s 2017 and that; didn’t want to rush into this. Thought I should finally enable TLS/SSL since it’s free. I’m not sure I entirely agree with the arguments for a site like mine (wouldn’t metadata be the biggest problem?), but it’s pointless trying to argue against the tide. One thing though: Zscaler; anyone who has had to browse through that realises that TLS/SSL isn’t bulletproof. I understand why it exists as a product, but, gah, as an end user it’s just horrible.

The EFF site will guide you down the certbot-auto route for NetBSD, which is silly as there is a py27-certbot package - just use that.

Bozohttpd works fine with Let’s Encrypt; the only issue is that it serves either https OR http, unfortunately not both at the same time. I haven’t yet figured out a way to redirect traffic between ports, so that means I’m effectively running two webservers at the moment as per this rc.conf approach (a sketch of the resulting changes follows the list). I.e.:

  1. Duplicate /etc/rc.d/httpd to /etc/rc.d/httpsd.
  2. Edit it, making sure to change name to httpsd and command so that it is explicitly calling /usr/libexec/httpd
  3. Add a procname=$name line (otherwise it’ll get confused between httpd and httpsd and think they are the same).
  4. Change required_dirs to $httpsd_wwwdir
  5. In rc.conf have both a httpsd=YES and a httpd=YES
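Put together, the changed lines in /etc/rc.d/httpsd end up something like this (a sketch reconstructed from the steps above, not a verbatim copy of the stock script):

name="httpsd"
rcvar=$name
command="/usr/libexec/httpd"
procname=$name
required_dirs=$httpsd_wwwdir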

Then I have the following entries in rc.conf for http:

httpd_flags="-S bozohttpd -v /var/www/vroot -M .html 'text/html; charset=utf-8' '' '' -M .xml 'text/xml; charset=utf-8' '' ''"

Whilst httpsd has these extras:

-Z /usr/pkg/etc/letsencrypt/live/atomicules.co.uk/fullchain.pem /usr/pkg/etc/letsencrypt/live/atomicules.co.uk/privkey.pem -z 'EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;'

The ciphers as advised here.
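Put together, the httpsd entry in rc.conf is just the httpd one plus those extras, i.e. something like:

httpsd_flags="-S bozohttpd -v /var/www/vroot -M .html 'text/html; charset=utf-8' '' '' -M .xml 'text/xml; charset=utf-8' '' '' -Z /usr/pkg/etc/letsencrypt/live/atomicules.co.uk/fullchain.pem /usr/pkg/etc/letsencrypt/live/atomicules.co.uk/privkey.pem -z 'EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;'"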

Since running two webservers isn’t ideal I think I’ll ultimately have to redirect all traffic with the firewall (or run a proxy I suppose?), but that is going to have to wait until I perform some server maintenance and finally switch from IPFilter to NPF (which I should be able to do now I’m running on KVM).


[EDIT: 2017-07-30] Note: it’s advisable to set procname in both /etc/rc.d files. I think otherwise the start-up order matters (god knows how it gets confused, but I found setting procname in both worked).

Site Performance Improvements

I just couldn’t resist the temptation of trying to get my Speed Index below the magic 1000 number*.

The first efforts I made were eliminating any external requests where possible; in fact, I’d already started making a few tweaks as a result of Jacques Mattheij’s Fastest Blog post. The rest were made after testing on WebPageTest.

Changes made:

  • Hosted the Creative Commons image, used in the footer, locally.
  • Used my own search form, still passing off to DuckDuckGo, as opposed to using their iframe.
  • Provide gzipped versions of all files (see the sketch after this list); and that is as far as I can go with my webserver - keep-alive and caching are not an option.
  • Moved styles inline; much to my chagrin it does improve things. Fortunately it is still just as easy to manage from my point of view as I’ve just moved the file from /styles to /_includes.
  • Moved the Google Web Fonts stylesheet to a <link> instead of using an @import; I’m not getting rid of Web Fonts as I don’t want to be ugly, plus it’d be a crime not to use Vernon Adams’ fonts.
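The gzipped versions are generated as a deploy step, something along these lines (a sketch, assuming the server will serve a foo.html.gz twin when the client accepts gzip; _site is Jekyll’s default output directory):

# Keep a compressed twin of every compressible file in the built site
find _site -type f \( -name '*.html' -o -name '*.css' -o -name '*.xml' \) \
	-exec gzip -kf9 {} +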

Of course, I’m not really sure there is a lot of point in making a site that no one reads really fast to load, but never mind.

* - This is with the default US server test though, so I imagine it’s faster still from the UK, but I’m still not very mobile friendly; I care far more about Elinks than mobile browsers.

Indieweb - Automatically sending webmentions

In crowbar-ing in webmentions I realised that the syndication code I’d written needed overhauling if I was ever going to support more than u-in-reply-to. In fact, so much so that the code in my note syndication post is now obsolete; rather than looping through the posts in the syndication method of each syndication class instance, I’m doing the looping just once in the Rakefile and calling the correct syndication class as required.

That overhaul was time consuming and explains the lack of posts here this month - I’ve spent most of my time doing behind-the-scenes work - but things are much cleaner now; although, as ever, there is always room for improvement: I am still only webmentioning u-in-reply-to, but it will now be easier to add support for other uses of webmention.

In the Rakefile, when looping through posts, I do this when I come across a new link post since the last deploy date:

case yaml["type"]
	when "link"
		post_data["link"] = yaml["link"]
		@posse_pinboard.syndicate(post_data)
		#For the time being webmentions are only sent for link type posts that have this key
		if yaml.has_key?("u-in-reply-to")
			@webmention.send(post_data)
		end
	when "photo"
		#Different things
	#And so on
end

Simple. The actual sending of the webmention is just as easy. I’m using Nokogiri to find the webmention endpoint (as far as I know it will only ever be on a link element):

require "nokogiri"
require "open-uri" #needed for open() on a URL

def find_webmention_endpoint(post_url)
	#Should cache these? Probably not worth it.
	page = Nokogiri::HTML(open(post_url))
	webmention_link = page.xpath("//link[@rel='webmention']")
	if webmention_link.empty?
		fail NoWebmentionEndpoint
	else 
		begin
			#Should be only one. I guess it's always on a link
			webmention_endpoint = webmention_link[0][:href]
		rescue
			#Something bizarre going on
			fail NoWebmentionEndpoint
		end
	end
end

And lastly, sending the webmention is as easy as:

require "net/http"

#post_form wants a URI object, not a plain string
res = Net::HTTP.post_form(URI(webmention_endpoint), 'source' => post_data["my_post_url"], 'target' => post_data["link"])

Just in case you’ve forgotten: my Jekyll Indieweb repository

RE: Testing Receiving Webmentions

Testing my cobbled together code for automatically sending webmentions. For the time being it is only for things I’ve marked as being u-in-reply-to. I need to overhaul my code to cover other uses for webmentions.

Testing Receiving Webmentions

This is pretty much a test post so I have something to target as I work on implementing support for automatically sending webmentions on deploy and (obviously) receiving webmentions via webmention.herokuapp.com; I’m far more bothered about being able to automate sending webmentions than I am about receiving them, but from a practical standpoint I can’t do one without the other.

I’ve decided to go with Voxpelli’s service in the first instance to make my life easier. I may ultimately write my own and properly host my own webmentions receiver; I have horribly tempting thoughts to see what I can achieve with Lua and Bozohttpd.

Some very brief thoughts on webmention.herokuapp.com, to try to flesh out this post a bit:

  • It is graciously provided for free and takes all of a second to implement, so I have no right to complain at all, but…
  • It requires Javascript to display mentions. I do use Javascript on my Archive page, but in general I’m trying to be Javascript free (purely because I’m the biggest user of my website and I use Elinks a lot).
  • There’s no administration functionality, such as being able to review webmentions before allowing them to be posted or deleting spam webmentions.
