atomicules

Push propelled program tinkerer and picture maker.

LINK: Tiny little tweak to PWman

As it says in the commit message, I'm not really sure why I continue to use this as a password manager, but I suppose it is part security through obscurity and part laziness (ain't broke, don't fix it).

This adds a rather nifty "ruler" below the password field to aid in visual selection of individual characters for the logins that ask for random characters:

Password:   djdhhfhfkjhfuwhuifbvcnotrealmdsnwehbweha
            1234567890123456789012345678901234567890

I've wanted to do this for ages and assumed, as ever, that since it is C it'd be too fiddly to do, but it actually took all of five minutes.
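The idea is easy to sketch outside of C too. Here's a hypothetical shell version (the function name and output format are mine, not PWman's):

```shell
# Hypothetical sketch of the ruler idea (not the actual PWman C code):
# print a repeating 1-0 index line under a string so individual
# characters can be picked out by position.
ruler() {
  local s="$1" out="" i
  for ((i = 1; i <= ${#s}; i++)); do
    out+=$((i % 10))   # digits cycle 1..9,0 to match the example above
  done
  printf '%s\n%s\n' "$s" "$out"
}

ruler "djdhhfhfkjhf"
```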

Part of the pretending I know how to program in C series.

Finally Switched Netbsd From Xen To Kvm On Linode

Finally got around to it. Thanks to some tips from a fellow Linode/NetBSDer I could avoid almost all of the pitfalls:

  1. Ahead of migration edit /etc/fstab and change all xbd1.* to wd0.*. They were "1"s because XEN required a boot disk, but since that was going to be deleted with KVM I knew the disk would change to "0".
  2. Also before migration edit rc.conf and change xennet0 to wm0.
  3. And ideally before migration, but I forgot and only did after migration: Edit /etc/cgd/cgd.conf and change xbd1.* to wd0.* and rename /etc/cgd/xbd1e to /etc/cgd/wd0e.

And that's it. Was much less painful than I was anticipating. At the moment I don't have serial bootblocks so I don't have lish access, only glish, but come the next NetBSD update I'll correct that.
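For reference, the three pre-migration edits boil down to a few sed one-liners. This is a sketch against throwaway copies so it can be tried safely (the file contents here are invented stand-ins, not real NetBSD configs); on a real install you'd edit the files in /etc:

```shell
# Sketch of the pre-migration edits, run against scratch copies.
etc=$(mktemp -d)
printf '/dev/xbd1a / ffs rw 1 1\n' > "$etc/fstab"
printf 'ifconfig_xennet0="dhcp"\n' > "$etc/rc.conf"

sed -i.bak 's/xbd1/wd0/g' "$etc/fstab"      # step 1: xbd1* -> wd0*
sed -i.bak 's/xennet0/wm0/' "$etc/rc.conf"  # step 2: xennet0 -> wm0
# step 3 would be the same xbd1 -> wd0 substitution in /etc/cgd/cgd.conf
# plus renaming /etc/cgd/xbd1e to /etc/cgd/wd0e.

cat "$etc/fstab" "$etc/rc.conf"
```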

Netbsd Under Kvm On Linode

In a way this really doesn't need a blog post. It's not as fiddly as XEN was, and if you are willing to just do everything through Glish it pretty much proceeds as expected. But my concern was being reliant on Glish, as until recently it didn't work in Firefox on NetBSD (it does now), and connecting via lish would give you a blank screen (not fun). But you can get both to work:

  1. Create three disks, 1x 1024 MB (ext) called "Rescue", 1x 1024 MB (raw) called "Install" and 1x the remainder (raw) called "NetBSD". An extra Rescue disk is required owing to the size of the image and unzipping it.
  2. Create two configuration profiles: use Direct Disk, Full Virtualisation and turn off the FileSystem/Boot helpers. One has the NetBSD and Install disks mounted and boots from the Install disk in order to install to the NetBSD disk; the second is the final one, which just boots NetBSD.
  3. Boot into Rescue mode with the Rescue disk first, Install disk second, etc.
  4. mount /dev/sda /media/sda and then cd /media/sda.
  5. Get the USB image, wget http://cdn.netbsd.org/pub/NetBSD/NetBSD-7.1/images/NetBSD-7.1-amd64-install.img.gz.
  6. gunzip it and copy to the install drive dd if=NetBSD-7.1-amd64-install.img of=/dev/sdb.
  7. Boot the install configuration and within 30 seconds get the lish console open and press space to drop out of the menu. Select 4 for the boot prompt, enter consdev auto, then menu, then press enter to boot. If you want to read about consdev, it's in man 8 boot_console.
  8. For whatever reason it fails to automatically launch the install, so login as root and then type sysinst to run the installer.
  9. Go through the install, on the bootblocks screen accept default of "Use serial port com0" and then under "Set serial baud rate" select 115200 for the baud rate.
  10. Shutdown the install configuration and then boot the NetBSD one. Hey presto! Glish and Lish work.

I still haven't converted my XEN install to KVM, but might soon.

[EDIT: 2017-06-26] Wow, if you want to play with NetBSD 8.0 Beta then you might as well double the Rescue and Install images.

tmux: Kill all sessions except these

I use tmux an awful lot at work since it allows for a workflow where each session name references a separate piece of work that may need to be returned to (it's impossible to know upfront) and if it does it's far easier to pull up the previous session than start off from scratch.

However, the problem with this approach is it's really easy to rapidly accumulate sessions and to have no idea which ones I need to keep hanging around which is how I regularly end up with fifty sessions; in fact it's really not unusual for me to end up with over a hundred sessions.

This means every few weeks I need to purge my sessions. Until I've figured out a way to automatically look up the work id and see if it's closed (there is an API, but I've not figured out if that kind of query is possible yet) I either have to close one session at a time or hope there is just one session I want to keep, so I can use tmux kill-session -a -t theoneIwanttokeep. Which is great if there is just one I want to keep, but invariably I know for a fact there are four or five I want.

So I finally wrote a simple script to do just that:

#!/bin/bash
# Kill every tmux session whose name isn't among the arguments.
# (Word-boundary match so e.g. "api" isn't kept just because an
# argument happens to contain those letters.)
for i in $(tmux list-sessions -F '#S'); do
  if [[ " $* " != *" $i "* ]]; then
    tmux kill-session -t "$i"
  fi
done

Then I can call this as ./tkse sessiona sessionb sessionc sessiond, etc and it'll kill everything except those sessions.

This won't work if a session name has a space in it, but what kind of heathen does that?

Making use of Haskerdeux

I honestly thought that after I got Haskerdeux working again I'd not be doing much else to improve it, but lo and behold the whole "fix your own frustrations" thing kicked in and I ended up improving it so I could use it to work on todos of any date. Why? Because I wanted to be able to easily script adding a whole bunch of todos on various dates in the future.

I actually ended up doing some Ruby to (technically) Haskell scripting as follows:

require 'helpscout'
helpscout = HelpScout::Client.new("<api key>")

# Read in an array of "conversations" I had from a previous action
# I.e. a text file with one of something like this per line: https://api.helpscout.net/v1/conversations/326542115.json
conversations = File.readlines("conversations.txt").map(&:strip)

conversations.each do |conv|
  ticket = helpscout.conversation(conv[43..-6])
  # DateTime.parse is clever enough to pick out the date from a string like
  # "something something needs to happen on 2017-03-05 so here's some text"
  todo_date = (DateTime.parse(ticket.preview) - 1).to_date.iso8601
  `./haskerdeux #{todo_date} new "Need to do this before tomorrow [#{ticket.number}](#{ticket.url})"`
end

(This Helpscout Ruby gem is really good, by the way).

I.e. I had a list of HelpScout conversations that had been generated previously and I had actions based on these I needed to do on a certain date. So I iterated through each conversation, parsed the date from the ticket preview text and then added them to my Teuxdeux list via a Ruby shell/system call to Haskerdeux. Simple, hacky, nifty.

NPF so far

I am late to the NPF party, but let's face it: I don't use NetBSD because I want the latest and greatest, I use NetBSD because I want something I can (mostly) make work wherever I put it.

Since setting up a proper desktop machine with NetBSD I decided to use it as a playground for NPF with the ultimate goal of switching to NPF from IPF (IPFilter) where it actually matters to me most; IPF still works for me, but it seems to become more quirky with each NetBSD release.

It was not obvious - at all - to me from the documentation how to get NPF working with both IPv6 and IPv4 (I have both at home - thanks, excellent broadband provider!). With IPF there are two separate .conf files that are mostly independent of each other. Anyway, in case anyone else is struggling similarly, I ended up doing the following; I couldn't figure out a way to completely avoid duplication:

# Simple npf.conf for Desktop with wired connection and IPv4 and IPv6

$ext4_if = inet4(bge0)
$ext6_if = inet6(bge0)

$services_in_tcp = domain
$services_in_udp = domain

procedure "log" {
    log: npflog0
}

group "external" on $ext4_if {
    pass stateful out final all

    pass stateful in final family inet4 proto tcp to $ext4_if port ssh apply "log"
    pass stateful in final proto tcp to $ext4_if port $services_in_tcp
    pass stateful in final proto udp to $ext4_if port $services_in_udp

    # Passive FTP
    pass stateful in final proto tcp to $ext4_if port 49151-65535
    # Traceroute
    pass stateful in final proto udp to $ext4_if port 33434-33600
}

group "external6" on $ext6_if {
    pass stateful out final all

    pass stateful in final proto tcp to $ext6_if port $services_in_tcp
    pass stateful in final proto udp to $ext6_if port $services_in_udp

    # Passive FTP
    pass stateful in final proto tcp to $ext6_if port 49151-65535
    # Traceroute
    pass stateful in final proto udp to $ext6_if port 33434-33600
}
group default {
    pass final on lo0 all
    block all
}

Attempts to have one group that would work on both IPv6 and IPv4 failed. Maybe it is possible somehow, but I sure as hell couldn't figure it out and the above does work.

There are some good tips at the bottom of the npfctl man page that you can use to test the rules are loaded and working:

  1. Use npfctl reload, not load, otherwise you get a weird error of "npfctl_config_load: no such file or directory".
  2. Then use npfctl start.
  3. Then use npfctl show, which should list the rules.

Since I seemingly had it working on the desktop I gave it a whirl on XEN, which requires building your own kernel and then being able to publish the kernel somewhere you can download it from so you can boot it. This is more fiddly than it sounds when your web host is also your XEN NetBSD install. However, after doing all that it turns out NPF is broken on XEN on 7.0.2. I could fight it, but I've decided to just wait until 7.1.

Coming back to the desktop I was surprised to find that just having npf=YES in /etc/rc.conf wasn't enough to actually load the rules and start npf on boot - or at least that is what I thought. I started playing about with trying to explicitly call /etc/rc.d/npf reload and /etc/rc.d/npf start in rc.local, but then found it would produce an error because the interfaces didn't seem to be ready. After a bit more searching I found this: npf startup failure when using dhcpcd inet4 and inet6. The exact issue I was seeing.

Guess I'm waiting for 7.1 on the desktop as well!

Definitely feel fine with not rushing towards NPF.

Gradle and Java SSL debug

I hate Java and I hate Java build systems even more. This is why for the most part I always try to keep things simple and just use javac and java directly where I can. However, invariably if you are going to touch other people's Java you can't avoid things like Gradle and Maven. Gradle isn't too bad I suppose, but I spent hours and hours trying to figure out why I couldn't do this:

gradle run -Djavax.net.debug=SSL,trustmanager

This was with Gradle 3.2.1 and seemingly it should work. I made my default assumption - it must be me doing something wrong - but after finally running:

gradle run -Djavax.net.debug=SSL,trustmanager --debug

and inspecting the output I discovered those options were not being passed through. However, this also gave me the solution to the problem: copy the command from the --debug output (search for "Starting process 'command") and then paste it directly into the terminal, where I could insert -Djavax.net.debug=SSL,trustmanager and run it. Hey presto, another example of build managers just getting in the way. I wonder if Gradle has a command like gradle show to print the command it would run?
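The splicing step can even be scripted. A hypothetical helper (the function name and the sample command line are mine; the real command comes from your own --debug output):

```shell
# Hypothetical helper: given the java command line copied from Gradle's
# --debug output, splice the SSL debug flag in right after the java binary.
add_ssl_debug() {
  printf '%s\n' "$1" | sed 's|^\([^ ]*java\) |\1 -Djavax.net.debug=SSL,trustmanager |'
}

add_ssl_debug "/usr/bin/java -cp app.jar Main"
```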

Haskerdeux Neux

I always said that had I had a smartphone I would have stuck with Teuxdeux all along. There is something about it that just sits right with me, hence all my efforts to emulate it with Taskwarrior.

So it was no surprise I switched back when the opportunity finally arose. But I'm still keen to be able to accomplish as much as I can via the Terminal on NetBSD - I'll accept platform agnosticism over owning every byte of my own data. I'd stumbled across dmi3/teuxdeux in the meantime (wilderness years) which was really interesting to me as I'd spent a huge amount of time playing with curl trying to figure out the neux API when it was first (not actually) released and could never get it to work. Thanks to dmi3/teuxdeux I realised it must actually be possible to work with the new non-API.

The first mistake

After playing much more I finally realised that the whole (well, 98%) reason I couldn't get things to work was that I'd been misspelling "authenticity". I checked. Four years ago in my curl calls I was spelling it "authencity". Doh! I'm pretty sure I would have persevered had I realised.

The second mistake

What the hell, things still didn't work?! After much more confusion I then realised the PUT request had "vi" and not "v1" in the URL. Typos have screwed me up a lot, but this takes the biscuit.

The third mistake

I'll accept this one. The next reason I failed to get this to work was that I'd not used the -L (follow location) parameter with curl when logging on. That was significant, and I only ended up trying it after stumbling across this old blog post: Using curl with a web site secured by Rails Authenticity Token.

The last mistake

Since in my attempts to get this to work I'd long since dropped back from Haskell to straight use of curl, the last mistake I'd made was a misunderstanding of the curl man page. I thought you could just use --cookie-jar for writing and reading cookies, but you can't: you need to use it in conjunction with --cookie for reading and sending the cookies; --cookie-jar is just for saving them.

Mistakes I realised I'd made last time

After finally getting to the point where I could logon and make simple API calls on the command line, I started changing the Haskell app to suit, still using Network.Curl even though it is very out of date, because it still (seemed to) work and the state of the other options seemed as incomplete and confusing as it was a few years ago. I also realised I'd done a lot of bad things originally, like recreating the curl object anew in each method, so I improved that and passed around a curl object instead.

I had things "working" again with Network.Curl, apart from the fact that I couldn't figure out what it was doing with cookies. As far as I could tell it seemed to be handling them in memory. The CurlCookie options worked and were necessary, but it didn't actually seem to save the cookies to a file anywhere. I ended up having to re-logon every time.

Not a mistake, but a change

Even after that work-around, though, I ran into a roadblock with PUT requests. They seemingly should have worked via Network.Curl, but I eventually had to conclude they didn't.

I admitted defeat and went for straight system calls instead since I knew the command line worked. It doesn't make it spectacularly Haskelly, but it does mean it works.

Same, but different

So after a few months I'm finally back to the exact same functionality I had a few years ago. The only real differences are:

  • Uses boring system calls instead of a nice Haskelly way (but it is all the same deep under the surface, right?) and so doesn't have as nice a way of checking returns as it did with 200 status codes when using Network.Curl.
  • You (and by that I mean "I") can only logon with netrc (which suits me).
  • It doesn't pass credentials all the time (because of the change in the API) which is a lot better, but it might mean when the cookies eventually expire you'd (I'd) have to reset the login by removing the cached auth token.
  • I can't be bothered to make it better (I had faint hope the first time, but no, it does what I want it to do).

Anyway, it's back alive again here: Haskerdeux.

There is no flashy fancy code to show off here, I just made it work again - that was achievement enough. The ironic thing about officially working in the Software/Technology industry is that I now have much less time for writing code than I did when I was "officially" a Mechanical Engineer.

cabal-install notes for NetBSD

Just a quick little post so in another three years time I'll be able to figure out how to do this again a bit more quickly.

If you are trying to use cabal with headers and libraries on non-standard paths (e.g. pkgsrc) then you need to do:

cabal install curl --extra-include-dirs=/usr/pkg/include/ --extra-lib-dirs=/usr/pkg/lib --configure-option=CPPFLAGS=-I/usr/pkg/include/ --configure-option=LDFLAGS=-L/usr/pkg/lib

The important bit being the --configure-option flags as the --extra ones are documented and thus a bit more obvious.

Writing crap Elixir code from the point of view of someone who writes crap Erlang code

Woo, quite a long title, but I think that effectively explains what this is going to be about. There are already lots of good posts on Elixir from proper Erlang programmers. This isn't meant to be another one of them. It's just a collection of things I've noticed.

It is pretty much the same

Superficially there is no real difference (or benefit to be gained) between Erlang and Elixir code. Here's some Erlang code:

pad_to(Length, Binary_string) when length(Binary_string) < Length ->
    Padded_binary_string = "0"++Binary_string,
    pad_to(Length, Padded_binary_string);
pad_to(Length, Binary_string) when length(Binary_string) == Length ->
    Binary_string.

And the equivalent in Elixir:

def pad_to(length, binary_string) when length(binary_string) < length do
  padded_binary_string = '0'++binary_string
  pad_to(length, padded_binary_string)
end
def pad_to(length, binary_string) when length(binary_string) == length do
  binary_string
end

Apart from the wrapping of the code in def and end it looks pretty much the same.

You have to still use Erlang

Quite a bit. I was surprised how quickly I came across the need to do this, but not all Erlang things have Elixir equivalents, so you have to use Erlang code; however, at least this is seamless and painless.

Original Erlang:

Binary_string = hd(io_lib:format("~.2B", [Binary_number])),

Elixir port:

binary_string = hd(:io_lib.format("~.2B", [binary_number]))

defp is really neat

In Erlang you export only the functions you want:

-module(polyline).
-export([six_bit_chunks/1]).

six_bit_chunks(Encoded_polyline) ->
    six_bit_chunks_(Encoded_polyline, []).
six_bit_chunks_([Head | Rest], Chunks_list) ->
    Six_bit_chunk = six_bit_chunk(Head),
    %Add to Reversed_chunks
    six_bit_chunks_(Rest, [Six_bit_chunk]++Chunks_list);
six_bit_chunks_([], Chunks_list) ->
    lists:reverse(Chunks_list).

In Elixir everything that uses def is exported automatically. If you don't want something exported you use defp. This is really neat.

def six_bit_chunks(encoded_polyline) do
  six_bit_chunks_(encoded_polyline, [])
end
defp six_bit_chunks_([head | rest], chunks_list) do
  six_bit_chunk = six_bit_chunk(head)
  #Add to Reversed_chunks
  six_bit_chunks_(rest, [six_bit_chunk]++chunks_list)
end
defp six_bit_chunks_([], chunks_list) do
  Enum.reverse(chunks_list)
end

Indices are different

I don't know why, but indices are different. I don't think there is a clever reason. They just are.

In Erlang lists:sublist("words", 2, 3) returns "ord" (indices are 1-based), whereas in Elixir Enum.slice('words', 2, 3) returns 'rds' (indices are 0-based). That is something else to be aware of: the difference between single and double quotes; single quotes are charlists as per Erlang, whereas double quotes are Elixir strings (something different).

Flipping order of args... and pipes.

In Elixir some functions have the order of the arguments flipped from how they are in Erlang. This threw me to start with and I thought it was to make Ruby programmers happy, but actually it's because of pipes.

In Erlang:

Eight_bit_chunks = lists:map(
    fun(Group_of_chunks) ->
        eight_bit_chunks(Group_of_chunks)
    end,
Five_bit_chunks).

In Elixir the list, etc is always first:

eight_bit_chunks = Enum.map(
  five_bit_chunks,
  fn(group_of_chunks) ->
    eight_bit_chunks(group_of_chunks)
  end)

This means you can then do things like this:

five_bit_chunks
|>  Enum.map(
      fn(group_of_chunks) ->
        eight_bit_chunks(group_of_chunks)
      end)

I.e. pull five_bit_chunks out and pipe it into the Enum.map. Which doesn't look that impressive on its own, but it means you can chain things together. Pipes are neat. My only quibble is that it is a step towards Haskell's esoteric symbols (I liked Erlang over Haskell precisely because it was a little easier to understand).
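If the idea is unfamiliar, the closest everyday analogue is probably a Unix shell pipeline, where each stage's output feeds the next stage's input (an analogy only - Elixir's |> passes the value as the first argument rather than via stdin):

```shell
# Each stage transforms the stream and hands it on, much as |> chains calls.
printf '%s\n' c a b | sort | head -n 2 | tr 'a-z' 'A-Z'
```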

Phoenix

I am mid-way through porting my Erlang command line application to Elixir. My plan is to then use Phoenix to turn it into a web application so other people (ha!) can use it. I also have plans to improve it. Slow, long-term plans. I mention this only to say that you can write Ruby without going near Rails, and the same is true of Elixir and Phoenix; and both are something you should do.

These are the ten most recent posts; for older posts see the Archive.