Podman can pretend to be Docker

And thus make docker-compose work quite nicely!

I was recently trying to run a big docker-compose.yml setup using podman-compose; it took ages to spin up and then didn’t work. But apparently you don’t have to use podman-compose at all: podman system service serves up a Docker API replacement which docker-compose can then use transparently.

First, start podman system service --time=0 unix://$PWD/podman.sock somewhere to have podman serve up the API.

Then, run DOCKER_HOST=unix://$PWD/podman.sock docker-compose up to start up docker-compose as usual.

I didn’t play around with it for too long, but it seems to work quite well at a glance. Much faster and prettier (colourful output, better separation between docker-compose vs container output).

Apparently this has been a thing since 2021 and this article has some info on how to start this up as a systemd service. (I personally did not want to do that because I want podman to run as non-root/my own user, but even that should be possible using systemctl --user.)

Running stable diffusion on an integrated AMD gpu (on Arch)

Lots of qualifiers, let’s see how this works.

First of all, here’s a much more detailed article, this just builds on that: https://www.gabriel.urdhr.fr/2022/08/28/trying-to-run-stable-diffusion-on-amd-ryzen-5-5600g/

The following AUR packages seem to be needed:

Sub-dependencies are noted because I use makepkg only; this would be easier with an AUR helper and/or more disk space.

Installing torch goes as follows, assuming a venv at ./venv:

$ ./venv/bin/pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2

Note that 5.2 seems to be the latest version available; I am not sure if it is compatible with ROCm 5.4, which is what the AUR has.

And then you can check if torch thinks it has gpu acceleration support:

# should print 'True', otherwise something is up
$ HSA_OVERRIDE_GFX_VERSION=9.0.0 ./venv/bin/python -c 'import torch; print(torch.cuda.is_available())'

And then you can try to do the usual dance, e.g. using stable-diffusion-webui:

$ HSA_OVERRIDE_GFX_VERSION=9.0.0 TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2' ./venv/bin/python launch.py --precision full --no-half
Python 3.10.8 (main, Nov  1 2022, 14:18:21) [GCC 12.2.0]
Commit hash: 685f9631b56ff8bd43bce24ff5ce0f9a0e9af490
Installing gfpgan
Installing clip
Installing open_clip
Installing requirements for CodeFormer
Installing requirements for Web UI
Launching Web UI with arguments: --precision full --no-half
No module 'xformers'. Proceeding without it.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [7460a6fa] from /home/luna/t/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt
Applying cross attention optimization (Doggettx).
Model loaded.
Loaded a total of 0 textual inversion embeddings.
Embeddings:
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.

Don’t forget ./venv/bin/ in front of python; that tripped me up a couple of times.

See https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs for more tips and tricks about the webui thing.

Generating a batch of 10 images, this is what I got for gothy witch in the forest with a large pointed hat with stars on it, 4k, detailed, gloomy, cat sitting on a tree branch:

a collection of images generated by stable diffusion for the prompt "gothy witch in the forest with a large pointed hat with stars on it, 4k, detailed, gloomy, cat sitting on a tree branch"

Which shows that writing prompts is not trivial either, I suppose.

And that might be how you can get this to run on a laptop. Worked for me. The only thing left is to contemplate the ethics/motivations for replacing artists with “ai” magic.

I prefer humans, so here are some whose art I like:

Veganizing Sohla El-Waylly's Cinnamon-date sticky buns!

(The best dough I have ever made m’self! A.k.a. math buns!)

Original recipe: https://www.bonappetit.com/recipe/cinnamon-date-sticky-buns / https://www.hellosohla.com/recipe-search/cinnamon-date-sticky-buns

These are the best cinnamon buns I have ever made, and the softest home-made dough I have ever tasted. Really flaky and soft like store-bought stuff, only made at home and without preservatives and other magics.

You will need a bunch of time, because the dough rises once overnight in the fridge and then again for 1-2 hours before baking the next day.

Author’s note: Bon Appétit had a big implosion of structural racism in 2020, with most of their POC and women on-video people leaving, including Sohla El-Waylly. If you want to support them, you can find those people doing things on their own now! For example, Sohla does regular things with NYT Cooking and elsewhere now.

The recipe

This is the adjusted recipe as I am making it, read below for how I arrived here from the original one.

Ingredients

dough:

filling:

glaze (optional):

Instructions

  1. tangzhong
    • add 100g of yoghurt and 20g of flour to a small pot
    • whisk over medium heat until it has a sticky, pudding-like texture
      • be careful here, this burns quickly if you’re not paying attention
    • take off the heat immediately and transfer to a different container so it does not cook further!
  2. make the dough
    • warm remaining yoghurt and 6 tbsp oil to 36°C (about body temp, should not feel hot or cold)
    • combine with yeast and 50g sugar
    • mix dry ingredients (remaining flour, baking soda and salt) for dough
    • combine yoghurt mix, dry ingredients and tangzhong; mix until a ball forms (will look very wet at first)
  3. knead & fold
    • put on unfloured surface (dough should still be wet and sticky)
    • knead, pushing away from you and then pulling it back towards you; until a smooth ball forms (~3 minutes)
      • if too sticky, oil hands (and maybe surface)
      • don’t add flour!
    • roll into 20cm square, fold twice in half to get a 10cm square
    • roll back into 20cm square, fold again like before to 10cm square
  4. let it rise!
    • oil bowl with 1 tbsp. oil
    • add dough, turn over to coat in oil
    • cover tightly with cling film
    • rest overnight for 8 hours (up to 1 day)
  5. make the filling

    • ???

    simpler alternative:

    • cover the rolled-out dough with a nice layer of butter
    • sprinkle with sugar and cinnamon to taste
  6. prepare the dough

    • punch down dough! (ouch?)
    • roll out dough on unfloured surface to 20cm square
    • fold twice again to 10cm square
    • roll out to 30cm square, about 0.6cm thick
  7. assemble the buns!

    • grease round cake tin/ceramic thing with remaining 1 tbsp oil (9 buns, ~25cm diameter should work)
    • cover rolled out dough with filling from above; leave 1cm or a bit more of free space at the farthest end (to “close” the buns at the end)
      • if using the date filling, sprinkle remaining sugar on top
    • roll into tight roll from closest edge to the one with the free space
    • cut off ~1cm at each end with a serrated knife for clean finishes
      • i like to bake the ends with the regular buns and then use them as samples
    • slice into three equal sections, then each section into three buns
      • wash knife in between with hot water for cleaner cuts
    • place buns cut-side down into the greased cake tin
    • cover with aluminium foil or lid, let rise for 1-1.5 hours until doubled in size
      • should spring back and leave a small indent when poked
      • buns should be pretty close together in the tin and touch so that they grow taller when baked
  8. bake ‘em!!

    • pre-heat oven to 180°C
    • bake covered for 20 minutes until “puffed, pale and mostly set”
    • remove lid/cover
    • bake 15 minutes uncovered until golden brown for soft and squishy buns

Add glaze if you want one; and then serve still warm! Eat soon (same day) for the nicest texture!

Modifications

I like to change things a bit, or some things have to change:

The Story

The original recipe uses buttermilk and eggs, which are not a thing in vegan baking. So we need to replace them somehow.

First, tangzhong to the rescue! Tangzhong is basically a little pudding made from some of the water and flour in the dough, which somehow binds liquid and keeps it in the dough, making it softer and fluffier than it might otherwise be. This replaces the egg in the recipe, which seems to play a similar role. (I am guessing here.)

The flakiness comes from using a good amount of oil in the dough, which is worked in after kneading by covering the dough in oil and then folding it a couple of times. (Twice!)

And finally the buttermilk we can just replace with vegan yoghurt, which seems to work well enough.

Wait! Actually the most troublesome part is converting the measurements from volume measures to grams and then calculating how much liquid and flour we are using in the tangzhong. This amount we then need to subtract from the original recipe, because tangzhong does not add or take away any mass from a recipe; it just uses some of the liquid and flour in different ways.

So you’ll need a scale as well because this seems to be a rather precise thing.

The Math

Replace the egg and add tangzhong while keeping hydration the same.

Hydration: $water\ from\ wet\ ingredients / flour = hydration$.

Original recipe hydration, ~60%: $(180g + 56g * 0.75) / 375g = 0.592$ ($180g + 56g * 0.75 = 222g$ water content).

One large egg = 56.7g, has ~75% water content => $56g * 0.75 = 42g$ water that needs to be added into the recipe.

Our hydration: $42g / x = 0.6$ => $42g / 0.6 = 70g$ flour for tangzhong taken from the total amount of flour. (Thank you to J for listening to me talking this through, which made me realize this calculation made no sense at all.)

When we use tangzhong we need the hydration to be 75% instead, so we need to increase the amount of water in the recipe.

So: $375g\ flour * 0.75 = 281g\ water$ to get 75% hydration. So we need $281g - 222g = 59g$ additional water in the recipe. But because we don’t have the egg we also need 42g more to add the water content of the egg back in?
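To collect that in one place (my arithmetic, spelling out the reasoning above; the last line is my reading of the “egg water” question, since without the egg its water has to come from the yoghurt too):

```latex
\begin{align*}
375g \cdot 0.75 &= 281.25g \approx 281g && \text{water needed for 75\% hydration} \\
281g - 222g &= 59g && \text{additional water vs. the original recipe} \\
59g + 42g &= 101g && \text{additional liquid once the egg's 42g of water is added back}
\end{align*}
```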

Finally, we usually use 5-10% of the flour in the recipe for the tangzhong, and the tangzhong is 1 part flour to 5 parts liquid. So $375g * 0.05 = 18.75g$ of flour to $93.75g$ of water, or up to $375g * 0.10 = 37.5g$ of flour to $187.5g$ of water.

It might be simpler, quoting from the same article again:

I’ve now made this standard slurry often enough that this is what I use for any yeast recipe calling for between 3 and 4 cups of flour: 3 tablespoons (23g) of the flour in the recipe + 1/2 cup (113g) of the liquid.

Remember, you’re using flour and liquid from the recipe, not adding extra flour and liquid! Take that into account when you’re measuring out the remaining flour and liquid for the dough.

Our recipe has 375g = 3 cups of flour, so maybe let’s go with that. Now we only need to decide whether to add the additional liquid for the higher hydration as water, milk or yoghurt. Let’s go with yoghurt, because that is what we are using anyway.

Resources

Accessing files using ephemeral containers

I have a service running in Kubernetes on my server that needed some debugging, so here’s how that went and the little trick that was needed for it.

Usually I’d just use kubectl exec and be on my way, but there were three issues with that:

  1. I wanted to access the db and sqlite was not installed in the container
  2. The service runs as a non-root user, so no installing anything in addition
  3. The live container was pretty locked down, containing only busybox and my binary

Ephemeral containers to the rescue!

Here’s what was necessary in the end:

$ kubectl --context live debug -it numblr-c67cd998f-69ktm --image=alpine:3.15 --target=numblr --share-processes

# try accessing the data of the live pod
/ # ps aux
PID   USER     TIME  COMMAND
    1 1000      0:25 /app/numblr -addr=0.0.0.0:5555 -debug-addr=0.0.0.0:6060 -db=/app/data/cache.db -stats
   67 root      0:00 sh
   76 root      0:00 ps aux
/ # ls /proc/1/root
ls: /proc/1/root: Permission denied

# replicate the live user
/ # apk add --no-cache shadow && useradd --home-dir / --shell /bin/sh numblr && apk del shadow
...

# run sqlite as that user for access!!
/ # apk add sqlite
...

/ # su - numblr -c 'sqlite3 /proc/1/root/app/data/cache.db'
SQLite version 3.36.0 2021-06-18 18:36:39
Enter ".help" for usage hints.
sqlite> .schema
CREATE TABLE feed_infos ( name TEXT PRIMARY KEY, url TEXT, cached_at DATE , description text, error text);
CREATE TABLE posts ( source TEXT, name TEXT, id TEXT, author TEXT, avatar_url TEXT, url TEXT, title TEXT, description_html TEXT, tags TEXT, date_string TEXT, date DATE, PRIMARY KEY (source, name, id));
CREATE INDEX posts_by_author_and_date ON posts (author, date);
CREATE INDEX posts_by_author_and_id_and_date ON posts (author, id, date);

And off I was, with access to the live db and able to run some EXPLAIN QUERY PLANs and so on!
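For example, a query plan check against the posts table above might look like this (the query itself is made up for illustration, but the index name comes from the schema shown):

```shell
# inside the debug container, as the replicated user; checks that lookups
# by author ordered by date can use the posts_by_author_and_date index
sqlite3 /proc/1/root/app/data/cache.db \
  "EXPLAIN QUERY PLAN
   SELECT * FROM posts WHERE author = 'staff' ORDER BY date DESC LIMIT 10;"
```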

Two key things here:

That’s it, have a nice day!

Say you suddenly want your own little Kubernetes cluster...

… say no more, I have one now, here’s what I did to make that happen!

Whyy?

Well. Because.

At least mostly. Here are some reasons:

Basically I want to see how difficult it is, if it is viable for me. And maybe it might even be nicer than the manual server management that I have so far.

I may also have been at KubeCon 2022 virtually this last week and may have wanted some more experience before we restructure things some more. But the automated-setup aspect is pretty tempting as well.

Some constraints

As usual, I have some odd choices and constraints:

How?

In short, the cluster runs k3s, the ports are published on a dedicated internal IP using kube-vip; and all of this took one evening of frustration and a morning with a clearer head to figure out.

Note: ⚡ This setup has been running for a few hours only, I don’t know if it is secure enough, fast enough, whatever enough. But it runs and now I can try some more things. ⚡

k3s

k3s is a small-ish Kubernetes distribution delivered as one binary, and I think it is easier to set up than a full-blown cluster. E.g. the database is a simple sqlite database and there’s no fun distributed etcd stuff to set up. Running k3s server gives you a Kubernetes cluster.

I installed k3s from the AUR. Do note that the k3s-bin package is lacking the k3s-killall cleanup script, which you will need to clean up the networks, iptables rules and containers that k3s starts up. I pretty much always ran the k3s-killall script when I was changing IPs, testing network settings, and the like. If in doubt, get a clean (network) state by running k3s-killall.

The devil is in the (network) details: I don’t want to publish ports on my public IP, and that is not something k3s seems to support out of the box.

I tried a lot of things, perused the docs over and over, and in the end just used kube-vip which was linked in a GitHub issue. But here are some of the steps that I tried:

In short, mostly I don’t understand how networking works properly, iptables even less; but there was also something odd going on.

kube-vip

I had enough.

I had enough. Luckily, kube-vip seemed to do what I want: it lets me specify an interface and an IP address that ports will be published on. In particular, instead of the builtin LoadBalancer implementation that comes with k3s we use kube-vip, set up according to their docs.

  1. Add a new internal ip for Kubernetes: sudo ip addr add 192.168.0.101 dev lo

    Note that the IP and the interface (lo) are custom and you can choose what you need there; e.g. my live setup has a different IP and listens on the actual network interface. (But because it’s a different IP, ports on that IP are not reachable from the outside.)

  2. Set up some permissions that kube-vip needs when running as a daemonset:

    curl https://kube-vip.io/manifests/rbac.yaml > /var/lib/rancher/k3s/server/manifests/kube-vip-rbac.yaml

  3. Configure kube-vip to listen on your IP and interface:

    • fetch the image: k3s ctr content fetch ghcr.io/kube-vip/kube-vip:$KVVERSION
    • generate the daemonset: k3s ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip manifest daemonset --interface lo --address 192.168.0.101 --inCluster --taint --controlplane --services
    • place the generated daemonset in /var/lib/rancher/k3s/server/manifests/kube-vip-daemonset.yaml

    For me this generated daemonset looked something like this:

      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        creationTimestamp: null
        labels:
          app.kubernetes.io/name: kube-vip-ds
          app.kubernetes.io/version: v0.4.4
        name: kube-vip-ds
        namespace: kube-system
      spec:
        selector:
          matchLabels:
            app.kubernetes.io/name: kube-vip-ds
        template:
          metadata:
            creationTimestamp: null
            labels:
              app.kubernetes.io/name: kube-vip-ds
              app.kubernetes.io/version: v0.4.4
          spec:
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                  - matchExpressions:
                    - key: node-role.kubernetes.io/master
                      operator: Exists
                  - matchExpressions:
                    - key: node-role.kubernetes.io/control-plane
                      operator: Exists
            containers:
            - args:
              - manager
              env:
              - name: vip_arp
                value: "false"
              - name: port
                value: "6443"
              - name: vip_interface
                value: lo
              - name: vip_cidr
                value: "32"
              - name: cp_enable
                value: "true"
              - name: cp_namespace
                value: kube-system
              - name: vip_ddns
                value: "false"
              - name: svc_enable
                value: "true"
              - name: address
                value: 192.168.0.101
              image: ghcr.io/kube-vip/kube-vip:v0.4.4
              imagePullPolicy: Always
              name: kube-vip
              resources: {}
              securityContext:
                capabilities:
                  add:
                  - NET_ADMIN
                  - NET_RAW
            hostNetwork: true
            serviceAccountName: kube-vip
            tolerations:
            - effect: NoSchedule
              operator: Exists
            - effect: NoExecute
              operator: Exists
        updateStrategy: {}
      status:
        currentNumberScheduled: 0
        desiredNumberScheduled: 0
        numberMisscheduled: 0
        numberReady: 0
    

  4. Finally, run k3s server with the correct parameters to use that IP:

    $ k3s server --node-ip 192.168.0.101 --advertise-address 192.168.0.101 --disable traefik --flannel-iface lb --disable servicelb
    INFO[0000] Starting k3s v1.23.6+k3s1 (418c3fa8)
    INFO[0000] Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s
    INFO[0000] Configuring database table schema and indexes, this may take a moment...
    INFO[0000] Database tables and indexes are up to date
    [...]
    

With all of this we disable traefik and the default load balancer (servicelb), and replace the latter with kube-vip which listens on the IP and interface we have specified.

ufw

We do need to expose one port on our firewall if we want to manage the cluster from the outside using kubectl:

$ sudo ufw allow 6443 comment k3s
Rule added
Rule added (v6)

Now we can deploy things using kubectl or other tools that talk to Kubernetes. (As long as we copy the config to the machine from where we want to run kubectl: https://rancher.com/docs/k3s/latest/en/cluster-access/)

caddy (and exposing ports locally)

To route things to the outside, we can expose a port to the host on the IP we have set up:

# Expose on port 15555 to the host.
#
# With this setup the port is then routed outside the host in some other way,
# e.g. using Caddy outside of Kubernetes.
apiVersion: v1
kind: Service
metadata:
  name: numblr
spec:
  selector:
    app: numblr
  ports:
    - protocol: TCP
      port: 15555
      targetPort: http
  type: LoadBalancer

See https://github.com/heyLu/numblr/blob/main/kubernetes/deployment.yaml for the full deployment including that Service, which you can deploy using kubectl apply -f https://raw.githubusercontent.com/heyLu/numblr/main/kubernetes/deployment.yaml (note the raw URL; kubectl needs the plain file, not the GitHub page).

And finally, we can expose this port using Caddy:

example.org {
    reverse_proxy 192.168.0.101:15555
}

Why like this?

I have regular services running already and Caddy set up, so I just want to add services to that for now. In the future I might play with setting up traefik + cert-manager so that subdomains and certificates are exposed automatically, replacing Caddy completely.

What now?

Now I have it running live, serving a staging instance of numblr that has a copy of the live database. It runs okay so far, but there seem to be some wrinkles I have to investigate still.

What’s neat is that I can now say kubectl apply -f kubernetes/deployment.yaml when I want to deploy a new version, and I don’t have to do a little scp + ssh manual service restart dance. And I can add new services in the same way, only having to tell caddy that there’s a new port to proxy on some new domain.

I think that’s pretty nice, let’s see how it turns out.

Edit from the future: A week later I am pretty happy so far. Better tooling, easy deployments, easy (and fast) access to logs, … And a really nice way to debug things, using ephemeral containers. Quite the nice workflow so far, even when used in the home.

Fin

That’s it! Have a nice day, I have some flowers to plant now.

A few notes on coffee making

So I’ve been making fancy coffee for a little more than a year two years (?!) now, here are some quick tips and tricks I’ve learned.

Quick note: These are my experiences and are what I enjoy, written more opinionated than I actually am. Please do ignore me if your experience does not match up! If someone tells you they know how things are and you are doing things wrong – they are likely wrong and probably not fun to be around!

Timing is everything!

Timing! After months, this one really helps me.

Sod the snobs

Some things I am doing:

Over time I’ve discovered that when I make espresso + mjölk things I usually don’t need sugar. With filter (+ mjölk) I usually want sugar, but sometimes less.

Keep at it

Have fun

That’s what makes it enjoyable for me, and keeps me making elaborate coffee things. Having these odd coffees with friends is also really fun for me, because they tell me fun things about what the coffee tastes like, often much more interesting things than I can taste.

For more, see the other things I have written about coffee-related topics.

Setting up a tiny friendly VPN using WireGuard

Recently, I wanted to play Stardew Valley with a friend. However, said friend lives a fair distance away and thus we don’t have a LAN. But now I have a VPN, and we can play together, no matter where we are! (Sadly Stardew Valley co-op mode does not work on mobile, that would be even neater.)

screenshot together

With all of the things below, I mainly followed the instructions on the ArchLinux wiki. There was some fiddling required, but all-in-all this was the work of an afternoon to set up.

Setting up ufw

For some reason I did not have a firewall running on my server yet. That’s a bit irresponsible and was mentioned on the ArchLinux wiki, so I did set it up:

$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
<REDACTED>/tcp             LIMIT IN    Anywhere                   # SSH port
443/tcp (WWW Secure)       ALLOW IN    Anywhere
Anywhere                   ALLOW IN    192.168.0.0/24
<REDACTED>/udp             ALLOW IN    Anywhere                   # WireGuard
80/tcp (WWW)               ALLOW IN    Anywhere

Note the <REDACTED> ports, those are non-standard ports for both SSH and WireGuard, redacted for some security by obscurity here. Do be careful with that SSH port though, because you can lock yourself out of your own server. Luckily I did not.

However, my server needed a restart to make these settings take effect. Not sure why, but that’s what it needed.

Setting up the WireGuard server

As we only want to connect to each other, not provide a full VPN, this is the config file for WireGuard. The keys were generated using the instructions on the wiki.

$ sudo cat /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = SERVER_PRIVATE_KEY

# if the server is behind a router and receives traffic via NAT, these
# iptables rules are not needed
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o ens3 -j MASQUERADE

[Peer]
# client1
PublicKey = CLIENT1_PUBLIC_KEY
PresharedKey = SERVER_CLIENT1_PSK
AllowedIPs = 10.0.0.2/32

[Peer]
# client2
PublicKey = CLIENT2_PUBLIC_KEY
PresharedKey = SERVER_CLIENT2_PSK
AllowedIPs = 10.0.0.3/32

(Note that all values for PublicKey, PrivateKey and PresharedKey have been redacted; you’ll need to fill in the actual values if you want to replicate this.)

Setting up WireGuard clients

Network Manager has built-in support for WireGuard, which is pretty neat. Here’s how to connect a client.

  1. First, set the network name (wg0) and private key for your client that is used to encrypt all traffic to the VPN server:

    network manager overview

  2. Secondly, set your IP address in the VPN to 10.0.0.x and use network mask 32.

    Gateway will be marked yellow but works fine when left empty.

    network manager ipv4 settings

  3. And finally configure the actual server (called “peer” here) you are connecting to. “Public key” is the public key of the VPN server, “Preshared key” is another secret you’ll get from the server for an additional layer of security.

    “Endpoint” is the host and port of the VPN server, written as host:port.

    network manager add peer

You should then be able to save and activate the connection. You should now be in the VPN and able to connect to other clients in the network using 10.0.0.x IP addresses.
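If a client does not use NetworkManager, the same settings can go into a plain wg-quick config file. A sketch for client1 (all keys are placeholders as above, and the endpoint host/port must match your server’s address and ListenPort):

```ini
# /etc/wireguard/wg0.conf on client1; bring up with: sudo wg-quick up wg0
[Interface]
Address = 10.0.0.2/32
PrivateKey = CLIENT1_PRIVATE_KEY

[Peer]
# the VPN server
PublicKey = SERVER_PUBLIC_KEY
PresharedKey = SERVER_CLIENT1_PSK
Endpoint = vpn.example.org:51820
# route only the VPN subnet through the tunnel
AllowedIPs = 10.0.0.0/24
```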

Testing the setup

And then your VPN should be ready!

Playing Stardew Valley!

And with that we could play Stardew Valley together! One player hosts the farm and the game, and others connect to it using the game host’s IP address in the VPN.

![screenshot host]()

![screenshot player join]()

(TODO: I know those screenshots are missing. I did play as pictured above, I just forgot to take the screenshots and then did not do it and now I want to get this post out.)

Conclusion

All in all this was much simpler to set up than expected! I am kind of amazed this is even possible to set up in a reasonable amount of time for someone who does not do sysadmin on a regular basis.

I repaired my bike's "Nabenschaltung"!

I now know they’re called hub gears, but that makes no sense to me.

A while back I forgot to fasten the bar that makes my back-pedal brake work to the frame of my bike. That was… no fun, because when I tried braking the next day it whacked into the bike frame and got stuck. I could not ride at all anymore, because my back wheel would not spin.

A few days later I found a fix that temporarily “solved” this problem using a hammer. My bike was riding again!

But it rode weirdly afterwards. My back wheel felt like it was wiggling much more along its axle, and it often did not want to switch between gears properly. It just felt very fragile, but I was scared of opening the hub gear and of what I might find in there. Or of whether I could ever put it back together.

But come this weekend I took it apart entirely, had a nice afternoon walk to my local hardware store, cleaned the parts, and will put it back together tomorrow.

What helped me was nlz’s YouTube channel. I found several videos detailing how to re-center the axle to fix the wiggling, how to take the hub gear apart, how to clean it, and how to put it back together. All in nice, short videos that succinctly show how to do all this, with clear images and tips for the trickier parts.

So good!

And to further help matters I even found technical drawings of the inner workings of my exact model!

[Update: And it even worked. My bike has been humming along nicely ever since. I had to fix a wobbly bike pedal, but nothing related to the hub gear.]

Getting coffee in Germany

I may have gotten quite deep into coffee in the last year.

And thus I had a problem: my information mostly comes from English-speaking regions, from the UK when I’m lucky and from the USA when I’m unlucky. Most shops and suppliers that are talked about there are at best impractical here, so what do I do? As an aside, I try to avoid amazon if possible, so the easy options there are out.

This is my little collection of tips and shops that I found useful so far.

First, your city may have local roasters and coffee shops that sell coffee beans and equipment! I found some locally and now have “my own” roaster from whom I get most of my coffee.

If you know me and you are interested, feel free to ask which roasters and cafés I’ve been to!

Given that I want to avoid amazon, I had to relearn a bit how to find things On The Internet. Funnily enough, I found https://idealo.de quite helpful: I can sometimes find shops there that are difficult to find using regular search, and it lets you search by product name.

To find local specialty cafés I found https://europeancoffeetrip.com/?s=Germany to be quite helpful, they listed a few cafés that I did not find via searching and have some pictures of the cafés they list.

Shops that deliver using DHL (because “Packstationen”):

Oh, and if you want certain Timemore products (like their scale), you may be better off using https://aliexpress.com rather than amazon. I ordered my scale there for half the price that was shown on amazon. It took a few weeks, but arrived without issue.

And that’s it, so far! I’ll update this when I find new interesting things.

(Last updated: 2021-11-03.)

A week with Neovim

So I’ve been trying out Neovim again. I’ve been interested a few times in the past, but when setting up a new laptop I decided to try it out for good.

In the end I was inspired to try again by lots of READMEs for Vim plugins mentioning it, and by an unrelated post somewhere that also mentioned Neovim.

So here I am, a little more than a week later, with regular vim removed from all my systems to properly get used to it. It’s been fun!

What’s been especially fun was finally making my setup reproducible again by putting the various Vim plugins into my regular dotfiles as git submodules.
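A sketch of what that looks like (the plugin and paths are just examples; Neovim picks up anything under pack/*/start in its config directory):

```shell
cd ~/dotfiles

# add a plugin as a git submodule inside Neovim's package path
git submodule add https://github.com/junegunn/fzf.vim \
    config/nvim/pack/plugins/start/fzf.vim
git commit -m "Add fzf.vim as a submodule"

# on a fresh machine, one command restores all plugins
git submodule update --init
```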

And also, my fresh Neovim config is much shorter than the fun collection of random things my .vimrc was before (which was 219 lines of things accumulated over 10+ years, most of it unused for a long time):

-- looks
vim.api.nvim_command('colorscheme peachpuff')
vim.api.nvim_command('set number')

-- highlight character that was jumped to correctly after lightspeed motions (https://github.com/ggandor/lightspeed.nvim/issues/66#issuecomment-952888479)
vim.cmd[[highlight Cursor ctermfg=NONE ctermbg=NONE cterm=reverse]] 

-- fast navigation with fzf
vim.api.nvim_set_keymap('n', '<Leader>q', [[ <Cmd>b#<CR> ]], {noremap = true})
vim.api.nvim_set_keymap('n', '<Leader>b', [[ <Cmd>Buffers<CR> ]], {noremap = true})
vim.api.nvim_set_keymap('n', '<Leader>e', [[ <Cmd>Files<CR> ]], {noremap = true})
vim.api.nvim_set_keymap('n', '<Leader>s', [[ <Cmd>Ag<CR> ]], {noremap = true})

That’s all of it. Some colorscheming so that it works on terminals with a light background [^1], a fix/thing for one of the plugins I use, and fun keybindings for fzf.vim which I use to navigate around quickly.

Do I like neovim? Yes! Is it much different than vim? Not that much. Do I want to switch back? Not at the moment.

I think the nicest thing overall about Neovim is that it has nicer defaults, e.g. when pasting from elsewhere it seems to pick that up automatically and paste things without indenting all over the place.

There’s probably lots more things to configure and play around with, but so far I haven’t needed to, and that has been quite nice.

[^1]: Side rant: Why do some programs need extra config for that? It’s a bit annoying.

Help! I need a ʒaɪf!

So, you just watched James Hoffmann doing another glorious oh no face and naturally you need this in your life.

However, you need subtitles, an acceptable file size, and more. Let’s get to it!

You download your video using youtube-dl, possibly in a more sensible file-size (--format '[height <=? 720]') and then also the subtitles using --write-auto-sub --skip-download.

And then comes the magic, which will result in fun things like this:

Which was generated by calling ./gif-it.sh 3:06 3:13.5 Coffee\ Substitutes\ -\ Tasted\ and\ Explained-KArQ3mBzWC4.mp4 ooh-no.

Behold gif-it.sh:

#!/bin/bash

set -eo pipefail

START="$1"
END="$2"
INPUT="$3"
OUTPUT_BASE="$4"

# reencode include subtitles (also cuts to size)
mpv --start="$START" --end="$END" --sub-font-size=70 "$INPUT" -o "${OUTPUT_BASE}.mp4"

# mp4 with sound
#mpv --start="$START" --end="$END" "$INPUT" -o "${OUTPUT_BASE}.mp4"

# gif
ffmpeg -i "${OUTPUT_BASE}.mp4" -filter_complex "[0:v] fps=12,scale=480:-1,split [a][b];[a] palettegen [p];[b][p] paletteuse" "${OUTPUT_BASE}.gif"

# smaller mp4 of gif
mpv "${OUTPUT_BASE}.gif" -vf format=yuv420p -o "${OUTPUT_BASE}.gif.mp4"

A few notes:

  1. I could not figure out how to make ffmpeg output a GIF and video, so the first step is a reencode to render the subtitles. It also cuts the video to size, because we only want that part anyways.
  2. The filter pipeline is interesting, though impenetrable for me to debug when things break. See the post by Giphy below for a detailed explanation.
  3. Finally there’s an MP4 of the GIF for smaller file sizes.
    • Note that here we have -vf format=yuv420p, because Firefox does not play yuv444 videos, which was what mpv selected by default when converting from the GIF.

Speaking of file sizes, for the 8.5 second GIF we get the following:

$ ls -lh ooh-no.*
-rw-r--r-- 1 luna luna 3.2M May  5 17:45 ooh-no.gif
-rw-r--r-- 1 luna luna 195K May  5 17:45 ooh-no.gif.mp4
-rw-r--r-- 1 luna luna 760K May  5 17:45 ooh-no.mp4

Not too bad, especially the .gif.mp4, and given that I am not a GIF professional.

That’s it, enjoy, this time with sound:

Resources:

And as for the ʒaɪf monstrosity/awesomeness, see https://t.numblr.net/pbsideachannel/status/341708073540399105 and https://www.youtube.com/watch?v=bmqy-Sp0txY from PBS Idea Channel.

Student illustrators envision post-pandemic New Yorker covers

I loved the one by Katrina Catacutan.

(Nitter link here: https://nitter.cc/tropical_toxic/status/1385698382589698048.)

Via waxy.org.

How to get that damn screen recording

tl;dr: wf-recorder -et -c h264_vaapi -d /dev/dri/renderD128 -g "$(slurp)" --audio=alsa_output.pci-0000_00_1b.0.output_analog-stereo -f "$HOME/$(date --iso-8601=seconds).mp4"

Say I want to do a screen recording on a desktop using Sway & Pipewire.

It’s difficult. It seems that sway does not support recording single windows, only the entire desktop, even with obs-studio-git installed.

For additional fun I want to record the audio of my desktop, e.g. content from a browser.

So let’s try wf-recorder? That works, but its audio support had a massive snag for me: the -a (or --audio) flag is very particular about how it takes its argument:

A minimal working example:

$ wf-recorder --audio=alsa_output.pci-0000_00_1b.0.output_analog-stereo
# or
$ wf-recorder -aalsa_output.pci-0000_00_1b.0.output_analog-stereo

The specific audio source string comes from running pactl list sinks | grep Name.

And my final incantation with screen selection and hardware acceleration:

$ wf-recorder -et -c h264_vaapi -d /dev/dri/renderD128 -g "$(slurp)" --audio=alsa_output.pci-0000_00_1b.0.output_analog-stereo -f "$HOME/$(date --iso-8601=seconds).mp4"

This allows me to turn on my laptop’s monitor, put the window I want to record there, and then select that section of my desktop to record from.

I could not get that to work in obs-studio running on Sway. However, it works really nicely in GNOME when obs-studio is nudged to actually run on Wayland using QT_QPA_PLATFORM=wayland obs.

However, obs-studio seems to use far more resources, so the incantation is still the best way to go for me.

Phew. Happy screen recordings!

Resources:

Hosting a Jitsi

After having a few times of not-quite-working video in the main https://meet.jit.si instance, I had a try at hosting it myself.

(I am always on a mobile 4G connection, which… makes such things interesting.)

My server runs Caddy for the web server, which then proxies to a little congregation of services running behind it.

Originally I looked at the jitsi-meet AUR package, but it seemed a bit too cumbersome, so I went with docker-compose.

Following the instructions at https://jitsi.github.io/handbook/docs/devops-guide/devops-guide-docker mostly worked, but there were a few snags:

In addition, I enabled authentication so an account and password is required to start new meetings.

As for Caddy, I am using the following configuration:

your-domain-here {
  header /libs/*.js Cache-Control "public, max-age=604800, immutable"

  request_header -Cache-Control

  reverse_proxy localhost:8000
}

Note the Cache-Control header manipulation: I could not get Jitsi to enable caching for everything on its own, so I am having Caddy do it. I could probably cache more resources, but the JavaScript files are the heavy ones (>1 MB) and make things fast enough for now.

And that’s it! You now have a working Jitsi setup!

Is it working better though?

Maybe. Subjectively yes, but the video will still not update sometimes for a bit while it catches up to the network.

Pipewire works!

It does! It’s pretty neat, even at version 0.3, and it makes this whole “setting up JACK” thing actually possible for me.

As you may recall, getting JACK to work is a bit of a trial on a good day. So much so that I did not use it at all, even after getting it to work, because turning it on pretty much means turning off PulseAudio [^1].

So, I didn’t.

Enter Pipewire. (🎉)

It installs pretty neatly on ArchLinux:

  1. basics: pacman -S pipewire pipewire-alsa pipewire-pulse pipewire-jack
  2. drop-in JACK support: AUR/pipewire-jack-dropin (without it it would be pw-jack $my_jack_program)
  3. systemctl --user start pipewire pipewire-pulse
  4. the giggles!

(The wiki has all the details if that does not work.)

You now have pipewire up and running. Applications should pick it up, but if they don’t, restart them once and you should be good to go.

The awkward

There are bugs. Here are some I have encountered:

There are probably more bugs.

But! But:

The awesome

You can now run any JACK application without setting up anything (no qjackctl, no carla, nothing!)!

You can reroute your pipe organ to your Very Serious meetings and play them a little song.

You can put a reverb or a delay on all incoming sounds, making those same meetings much more entertaining.

There are kind of no limits. You start up carla or catia, reroute audio at will, and then off to the giggles again.

Fin

So, pretty neat! It’s still quite buggy sometimes, but also quite awesome. I really like it, and the exciting part is that it makes me want to do more music things, actually record something, just play around, …

🎶

[^1]: Yes, I know it’s possible to combine them, but it was still a hassle to get working every time, and too much for me.

git status as a todo reminder

You know those TODOs and FIXME left throughout your projects, never to be uncovered again and only to be wondered about by librarians far into the future?

No? Maybe just me.

Anyways, since a few months I made my git status shortcut remind me about them:

$ git st
## main...origin/main
 M explorer.go
explorer.go:    // FIXME: restrict paths to only boards/
static/board.js:  // FIXME: potential confusion because query overrides config, even if config is more recent
static/board.js:      // TODO: notify about override in yaml somehow
static/board.js:// TODO: support refreshing automatically
static/board.js:            // TODO: always display timestamps in utc
static/board.js:        // FIXME: support display of all datasets (only displays one so far)

(Note that TODO and FIXME would be highlighted in red.)

This works using a git alias and a bit of shell fun:

# ~/.gitconfig
[alias]
	st = "!st() { git status --short --branch . && (git grep -E --color 'TODO|FIXME' -- :^Makefile || true); }; st"

This greps for TODOs and FIXMEs in your repository, ignoring Makefile because that contains a target that greps for TODOs as well sometimes.
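The `:^Makefile` part is git’s exclude pathspec syntax. If you want to see the grep half in action without touching a real project, a throwaway repo does the trick (the explorer.go content below is invented for the demo, and --color is dropped so the output stays pipe-friendly):

```shell
cd "$(mktemp -d)"
git init -q .

# a Makefile that mentions TODO (and should be skipped), plus a source file
printf 'all:\n\t@grep -rn TODO .\n' > Makefile
printf 'package main\n// TODO: handle errors\n' > explorer.go
git add .

# the grep half of the alias: search tracked files, excluding Makefile
git grep -E 'TODO|FIXME' -- :^Makefile
# -> explorer.go:// TODO: handle errors
```

Note the Makefile’s own TODO does not show up, which is exactly the point of the pathspec.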

Helps me a lot, maybe it helps you too!

grep -B1 'transaction started' /var/log/pacman.log | grep -v ALPM | grep -E '\-S|-U' | grep -v asdeps

Find packages that have been explicitly installed. Wait…

Oh yes, as I am writing I realize that pacman does this already (as it should) using pacman -Qe.

Ah well, learn something new every day or something. Coming up, neat git tricks and finally writing down what I do (differently) when installing ArchLinux. (I forget every time.)

pacman -Qi | grep -E '^Name|^Installed Size' | sed -E 'N;s/\n/ /g;s/Name\s+:\s+([-_a-zA-Z0-9]+)/\1/;s/Installed Size\s+:\s+(.*)/\1/' | grep -E '[0-9]{3,}\.[0-9]+ MiB'

Display packages taking up more than 100MiB on your system.

Most of this incantation is concerned with bringing both the name of the package and its size onto one line. The last grep then filters out the big packages. So you could adjust the size threshold by adjusting that last regular expression.
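Since the heavy lifting is plain sed, you can dry-run the line-joining part on a fake pacman -Qi snippet (package names and sizes below are made up, and the first grep is skipped because the sample is already filtered):

```shell
# two fake packages, only one of them >= 100 MiB
printf '%s\n' \
  'Name            : linux-firmware' \
  'Installed Size  : 123.45 MiB' \
  'Name            : sed' \
  'Installed Size  : 1.02 MiB' |
  sed -E 'N;s/\n/ /g;s/Name\s+:\s+([-_a-zA-Z0-9]+)/\1/;s/Installed Size\s+:\s+(.*)/\1/' |
  grep -E '[0-9]{3,}\.[0-9]+ MiB'
# -> linux-firmware 123.45 MiB
```

The sed N command pulls each Name/Size pair onto one line before the substitutions strip the labels.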

tofu frikaseé

frikasée

rice

uh, that is the recipe, and i seem to not be finishing articles, so this gets posted as is now. (which is already 2 months later.)

How to make your OLED display 10x slower!

I have been working on some Arduino-related things recently, which involved displaying some things on an OLED display (this one). However, I was very disappointed by how slow it was.

It took about 30ms per character to update. Knowing not much about actual hardware, possibilities, or how slow/fast it should be, I just continued on, assuming I’d just have to work with it.

It turns out I do not.

The display in question is an I2C display, connected on the A4 and A5 pin on the Arduino Uno board I’m using. And initially, it really was connected on those two pins, because that is what I found in the code sample from the seller:

#include <U8x8lib.h>

// this will be hella slow, mind the _SW_ constructors
U8X8_SSD1306_128X32_UNIVISION_SW_I2C u8x8(/* clock=A5*/ A5, /* data=A4*/ A4);

The problem is that it was doing the I2C communication in software, which apparently is really slow. The funny thing is that on the Arduino the pins actually have multiple purposes, and A4 and A5 in particular are also special I2C pins, then called SDA and SCL respectively.

And using a different configuration for the u8x8 display library, it suddenly was about ten times faster with updates. About 30ms for a “full screen” update now, and 2ms for smaller updates.

// _much_ faster
U8X8_SSD1306_128X32_UNIVISION_HW_I2C u8x8;

So… if you want your OLED display to be really slow, do as I demonstrated. Otherwise, mind the u8x8 constructors with SW (i.e. software) in the name, and stick with the ones that use the actual capabilities of the hardware.

Some more info:

See you soon (ah well, let’s see) with more Arduino things, voltage ladders and USB-MIDI firmware tricks! Oh my! :)

pacman -Qi $(pacman -Qu | cut -f1 -d' ') | grep -E '^Name|Installed Size' | grep -E '^Name|( [0-9]{2}(\.[0-9]+)?\s+MiB)'

Find the biggest packages that are about to be updated. (Actually, find the packages that need to be updated that are bigger than 10MiB on disk.)

Getting USB audio interfaces to work on (Arch) Linux

I’ve been having problems with getting external USB audio interfaces to work on my computer. I have a guitar and wanted to plug it into various external interfaces, and it didn’t work properly.

In the end, the trick was to tell jack to use a different audio interface for input than the one for output.

This is how the config looks in cadence:

A screenshot of 'cadence', showing the configuration of my audio devices

In the above, note that “Input device” is hw:1, whereas “Output device” is hw:0. For me, that’s the trick that worked in the end.

I found this via https://answers.bitwig.com/questions/1134/how-do-i-correctly-setup-audio-under-linux, which describes how to set up audio for Bitwig Studio, but really works for more software, I think.

In addition to this, set up realtime audio by installing the realtime-privileges package and adding yourself to the realtime group. (Don’t forget to also set it in jack/qjackctl/cadence.)

Hints:

(Note to self: The image above was taken using scrot -s, and then compressed using tinypng.com. convert image.png smaller.webp also worked, made the image go from ~21000 bytes to ~10500 bytes. But tinypng brought it down to ~8500 bytes.)

The little german tofu compendium

I’ve started making notes on the tofu that the various supermarkets in germany sell. Please pass along any other hints you might have.

Last updated on: 2020-01-24.

ldd scide | sed -nE 's/.*=> ([^ \t]+) .*/\1/p' | grep -v '^not$' | xargs ldd 2> /dev/null

I used this to find out which of the libraries a certain program (or library) links against have missing dependencies. In my case I ran a partial update for icu, which broke lots of things (mostly qt), and then I had to track down which libraries still used the old version of icu.

Replace the argument to the ldd call at the beginning with the program you’re trying to get running again.

Combine with:

And this is how it looks:

$ ldd `which emacs` | sed -nE 's/.*=> ([^ \t]+) .*/\1/p' | grep -v '^not$' | xargs ldd 2> /dev/null
/usr/lib/libtiff.so.5:
  linux-vdso.so.1 (0x00007ffeba3ea000)
  liblzma.so.5 => /usr/lib/liblzma.so.5 (0x00007ff2be27b000)
  libjpeg.so.8 => /usr/lib/libjpeg.so.8 (0x00007ff2be1e6000)
  libz.so.1 => /usr/lib/libz.so.1 (0x00007ff2bdfcf000)
  libm.so.6 => /usr/lib/libm.so.6 (0x00007ff2bde4a000)
  libc.so.6 => /usr/lib/libc.so.6 (0x00007ff2bdc86000)
  libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007ff2bdc65000)
  /usr/lib64/ld-linux-x86-64.so.2 (0x00007ff2be774000)
/usr/lib/libjpeg.so.8:
  linux-vdso.so.1 (0x00007ffff6bf1000)
  libc.so.6 => /usr/lib/libc.so.6 (0x00007f7fab9e8000)
  /usr/lib64/ld-linux-x86-64.so.2 (0x00007f7fabc43000)
/usr/lib/libpng16.so.16:
  linux-vdso.so.1 (0x00007ffe28792000)
  libz.so.1 => /usr/lib/libz.so.1 (0x00007f45b2fa6000)
  libm.so.6 => /usr/lib/libm.so.6 (0x00007f45b2e21000)
  libc.so.6 => /usr/lib/libc.so.6 (0x00007f45b2c5d000)
  /usr/lib64/ld-linux-x86-64.so.2 (0x00007f45b343e000)
/usr/lib/libgif.so.7:
  linux-vdso.so.1 (0x00007fff3350a000)
  libc.so.6 => /usr/lib/libc.so.6 (0x00007f09c8faf000)
  /usr/lib64/ld-linux-x86-64.so.2 (0x00007f09c93c8000)
/usr/lib/libXpm.so.4:
  linux-vdso.so.1 (0x00007ffc3ab9f000)
  libX11.so.6 => /usr/lib/libX11.so.6 (0x00007fcdaa3a2000)
  libc.so.6 => /usr/lib/libc.so.6 (0x00007fcdaa1de000)
  libxcb.so.1 => /usr/lib/libxcb.so.1 (0x00007fcda9fb5000)
  libdl.so.2 => /usr/lib/libdl.so.2 (0x00007fcda9fb0000)
  /usr/lib64/ld-linux-x86-64.so.2 (0x00007fcdaa73d000)
  libXau.so.6 => /usr/lib/libXau.so.6 (0x00007fcda9dac000)
  libXdmcp.so.6 => /usr/lib/libXdmcp.so.6 (0x00007fcda9ba6000)
/usr/lib/libgtk-3.so.0:
  <...>

Happy debugging random stuff… 🙄

Interesting StrangeLoop 2018 talks

What do I want from [... tools]

(Where tools are things like git-annex and upspin, and possibly other things like ssb.)

How debuggers (might) work (and an article about ballet/learning)

I’ve read this neat series of articles a little while ago, and it shows how to get a simple home-grown debugger up and running:

But there was also another article, about writing an strace-equivalent (and quite a bit more) using ptrace (2): (It gets into emulating syscalls for different systems and faking/disallowing system calls even.)

And switching topics, entirely, here’s an article about returning to ballet and learning. I really liked that one, because I also struggle trying to learn new things, which seemed to be easy when I was a bit younger. But often when I try to practice something, I do learn a little bit, even if it doesn’t really feel that way.

Quick, hand me some ssh security options!

Here you go:

Host *
  PasswordAuthentication no
  ChallengeResponseAuthentication no
  HashKnownHosts yes
  UseKeychain no

Found in this article on how to get a more secure macOS installation.

sqlite: Unable to open database file

Today geary couldn’t start up, saying the database was potentially corrupted. There was an option to just rebuild the database file, but as I like my history of emails and didn’t want to download all of them again, I gave up.

At first. :)

As it turned out, there was a weird empty geary.db-journal file right alongside the database, and it was owned by root. As geary (and sqlite) couldn’t access it, it couldn’t do any database modifications. (According to https://sqlite.org/tempfiles.html, that file is used for “rollback journals”.)

In the end, I just deleted the file (geary.db-journal), and geary worked again.

Here’s my approximate debugging session:

# start geary, see an error

# hm...
$ geary --debug
[...]
unable to open database [...] 'CREATE TABLE CorruptionCheckTable (text_col TEXT)'

# hm, let's try it manually
$ sqlite3 ~/.local/share/geary/<my-account>/geary.db
sqlite> CREATE TABLE CorruptionCheckTable (text_col TEXT);
unable to open database

# oops?  no idea what's going on, let's do something else/more important

# ...

# search for "sqlite create table error unable to open database file" on the web

# find http://fredericiana.com/2014/11/29/sqlite-error-open-database-file/

# file permissions?!
$ ls -la ~/.local/share/geary/<my-account>/
-rw-r--r-- 1 lu   lu   374161408 Feb  2 14:43 geary.db
-rw-r--r-- 1 root root 0         Feb  1 10:18 geary.db-journal

# why is there an empty root-owned file here?

# don't know, let's delete it
$ sudo rm ~/.local/share/geary/<my-account>/geary.db-journal

# and it works again

"You get to have your own narrative"

“But their narratives are not yours. And just like they get to have their own, YOU get to have YOUR own. And your narrative is that they are unreasonable, selfish, and frankly being jerks about this whole thing.

Those two narratives are not mutually exclusive. They exist simultaneously. Nothing needs to be done about them; they simply are.” – Laura at captainawkward

“Reasons are for reasonable people” – bostoncandy at captainawkward

Magic, games, oh my!

Which is to say, another time for a few links to interesting stuff.

Resuming videos in mpv

By quitting mpv with Shift+Q instead of just q you can continue watching where you stopped later.

Setting the --save-position-on-quit option makes this the default behaviour. However, the position will only be saved when quitting, not when skipping to the next file or anything else.

I'm writing a compiler?!

Or: An Incremental Approach to Compiler Construction

I appear to be writing a compiler. It’s not entirely by accident, but it’s not entirely intentional either. I’ve been interested in compilers for a long time, but I haven’t learned assembly, so most of my experiments have been compiling to different languages (like C), and interpreters.

But now I’ve found a paper that builds a compiler in 24 incremental steps:

An Incremental Approach to Compiler Construction, by Abdulaziz Ghuloum

It’s about writing a simple compiler for a sizable subset of Scheme (up to an interpreter), to raw x86 (32 bit) assembly.

I think it’s awesome!

My compiler targets x86_64, because that’s what my laptop is running. For now that seemed to amount to using the wide registers whenever pointers/memory locations are in play, which means at several steps I got loads of segfaults.

I’m currently at step seven, which introduces heap allocation, and with that several types that aren’t representable with just a stack. (Or maybe they are, but not without many difficulties, at least in this compiler.)

It’s fun, it’s challenging, but it is also doable, which means that I (mostly) understand what it is doing, without having much experience with assembly. I did know Scheme and compilers beforehand, though in theory this should also be possible if it were a compiler for Python. (In fact, I might port the first few sections to Python, so people can look at that, and then hopefully continue with the rest of the paper.)

A taste of the compiler

So, how do things work?

In the beginning, there was nothing. Except, the paper starts with just numbers, just a function returning a numeric constant:

// integer.c

int scheme_entry() {
    return 42;
}

What kind of assembly does that generate?

$ gcc -S integer.c
$ cat integer.s
    .file   "integer.c"
    .text
    .p2align 4,,15
    .globl  scheme_entry
    .type   scheme_entry, @function
scheme_entry:
.LFB0:
    .cfi_startproc
    movl    $42, %eax
    ret
    .cfi_endproc
.LFE0:
    .size   scheme_entry, .-scheme_entry
    .ident  "GCC: (GNU) 7.2.0"
    .section    .note.GNU-stack,"",@progbits

While that may seem like a whole lot of gibberish, what’s important are three lines:

// scheme.s

scheme_entry:          # define a label called "scheme_entry"
    movl $42, %eax     # move the number "42" into the "eax" register
    ret                # return

And now, we can call it from C:

// driver.c

#include <stdio.h>

extern int scheme_entry();

int main(int argc, char **argv) {
    int val = scheme_entry();
    printf("%d\n", val);
    return 0;
}

And sure enough, it prints 42:

$ gcc scheme.s driver.c -o scheme
$ ./scheme
42

To me, that was amazing! I didn’t know a thing about assembly, and here I was, writing a compiler, which about a week later supported ifs and let. (That was yesterday, today it’s learning about heap allocation.)

It’s not all sunshine

Here come the caveats.

You will have a much easier time if you can work in a 32 bit x86 environment, because that’s what the compiler in the paper targets. With a little more effort and debugging segfaults it is possible to port it to 64 bits, but if you can get a compiler for a 32 bit environment, I’d recommend that.

The paper is rather sparse in places. For example, it explains how to encode numbers and shows code for that, but then it leaves you on your own to write the code for booleans, characters and the other types. It’s doable, but it seemed a bit daunting to me at first. However, I think that was very much a didactic decision to not include all the code, because that will require you to think about what you’re doing.
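For instance, the number encoding: as far as I remember the paper’s representation, fixnums keep their value in the upper bits of the machine word, with the two low bits as a 00 type tag. Compiling the constant 42 then means emitting the shifted value, which shell arithmetic can illustrate:

```shell
# fixnums are shifted left by 2, leaving the low two bits as the 00 tag;
# the instruction emitted for the Scheme program `42` carries 42 << 2
printf 'movl $%d, %%eax\n' $(( 42 << 2 ))
# -> movl $168, %eax
```

On the other side, the runtime/driver shifts right again before printing, so the compiled program still prints 42.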

I also think I may have found a few mistakes, but I’m not entirely sure about those, partially because I work on a different architecture. But it works.

Resources

Want to write your own compiler as well? Great, here are some pointers to helpful things:

First and foremost, the paper itself.

I have boring opinions about tech

I have boring ideas about tech.

Or, not really, but somewhat.

Let’s talk about JavaScript. I don’t admire it, I just use it. When I need to make a request to the server in a web page, then I just write out plain old JavaScript without any libraries to make an XMLHttpRequest. It’s not that pretty, but most of the time it works pretty flawlessly.

I know what to do. I create it, I .open it, I know how to send query parameters (the new URL object is pretty neat, but pasting the query string together also works for me), I know how to send POST requests (FormData is your friend).

Similarly, when I need a web service, I’ll write some Go. Usually without any libraries. In fact, one of the things I like about it is that I don’t have to. The standard library is fine (I’d say excellent), and it works pretty well for these things.

And again, I know my way around. I know the interfaces, I know how to lock something with a Mutex should I need to, and so on. It’s almost boring, except that it’s not.

The list goes on. I don’t hate PHP, I like plain old Ruby, I appreciate ObjectiveC a bit, I can write Python, I’ll even dive into C if I have to, or sometimes if I just feel like it.

And still, I have also written non-trivial amounts of code in Haskell, Rust, some Erlang, and plenty of Clojure. I also like those languages, but I’m not religious about them. Sometimes I want them, but often I am fine with this tooling I have around.

I just like plain old boring technology. In fact, I find it exciting, and interesting that these kinds of boring things lead to interesting results. There’s something there, I think.


So what’s my point here, then?

My point is that I don’t like fighting over technology. Highly opinionated pieces about how language X or library Y are the best thing ever, or worse, how Z is the worst thing ever – these kinds of articles aren’t really interesting to me. Sometimes I’ll read them, but I prefer reading different things.

I love hearing about how language X helped solve a specific problem. Or how library Y helped the author develop a solution much faster. I like hearing about techniques and trade-offs.

So maybe that’s what I’ll write about next. Maybe I’ll write about Go and how its standard library helps me with all kinds of things. Or how I write these small (or sometimes rather big) pieces of JavaScript that enhance simple web pages. Or I’ll finally write Saved by the Shell.

Or not. After all, I have boring opinions about tech.


(Aside: I think what I’ve written here is already too opinionated. But alas, it was fun to write, and it does express a bit how I feel. So here we go.)

git clean --dry-run -x

Clean up untracked files from the repository, including files ignored by git (via .gitignore). Rerun with --force instead of --dry-run to actually remove these files. There is also an interactive mode (-i).
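A quick sandbox run shows what the dry run reports, including a file ignored via .gitignore (the file names are invented for the demo):

```shell
cd "$(mktemp -d)"
git init -q .

echo '*.log' > .gitignore
git add .gitignore            # tracked, so clean leaves it alone
touch debug.log scratch.txt   # one ignored file, one plain untracked file

git clean --dry-run -x        # reports both files as "Would remove ..."
```

Without -x, the ignored debug.log would be left alone; with it, everything untracked goes.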

Cooking without a stove

I recently moved to a new flat, and didn’t have a full kitchen for a week (still don’t, actually). What I did have was an electric kettle and a blender. As it turns out, you can cook delicious things with those; you just have to know what kind of recipes to look for.

In short, vegan and raw. At least, that’s how I ditched eating out (which I can’t afford to do more than maybe twice a month at most), and “cooked” neat things.

So, here is my current bag of tricks:

And if you have a fridge (which I finally have), you can make fancy cakes, desserts, …

My favourite recipes so far (german):

I continue to use just-add-boiling-water type meals sometimes, but I am much happier since I discovered I can still cook fancy things without a kitchen.

Oh, and I am also planning on making some raw vegan cakes & cookies. Very curious how they will turn out. I might make some of the following:

Simulating network latency

tc can be used to simulate network latency on Linux.

$ sudo tc qdisc add dev eth0 root netem delay 100ms

Now every packet sent out via eth0 is delayed by 100ms. (Note that a root qdisc only shapes outgoing traffic; delaying incoming packets needs extra setup.) It is possible to add a random offset on top of it, as follows:

$ sudo tc qdisc add dev eth0 root netem delay 100ms 10ms

tc can do a lot more. (StackOverflow was helpful as well.)

If the second tc qdisc add command does not work, you might want to use tc qdisc change instead. And when you’re done tc qdisc del is your friend as well!

Prices for cinemas in Leipzig

(Reduced prices, sometimes there are cheaper days, overtime & other things cost extra.)

Links are to the pricing pages, with the programs in parentheses.

Strangely calming

When your child says “Why am I not allowed to do this thing?”

Instead of defaulting to “My house, my rules!”

Try actually communicating a legitimate reason, because children pick up on subtlety and on context and on the unspoken messages, and it’s better to teach children lessons like “You should think really hard before taking on new responsibilities” and “It’s important to show consideration for the needs of the people with whom you share a living space” than lessons like “It’s okay for people to demand your absolute obedience so long as you’re dependent on them for survival.”

Via kriegsrhetorikinspace.tumblr.com. (Look at it, the post has more examples and discussion.)

And now, a video with a cat who thinks it’s a dog.

Vegan breakfast/brunch thingies in Leipzig

Row, row, row your boat...

(I arrived here after thinking about origins of the word “relationship”.)

(And yes, I’m that simple. And I like it that way.)

nmcli networking off

Kürbissuppe

1/2 kürbis
1 zwiebel
2 gr knoblauchzehen
2 kl karotten / 1 gr karotte (eher weniger)
1 kartoffel (?)
1 st ingwer (1-3cm)

1 dose kokosmilch (400ml?)

- gemüse anbraten
- mit gemüsebrühe auffüllen
- köcheln lassen bis alles gar ist
- pürieren
- kokosmilch unterrühren
- würzen
- JUHU!

Names of *the frog* (OTGW)

In Over The Garden Wall (which you should see, probably), the first episode starts with Greg enumerating rejected names for the frog:

“Wait, wait a second […]”

At the end of the first episode, the frog’s name is “Wirt”, which is also the name of Greg’s brother, who will henceforth be called “Kitty”. (Which is also my name, by the way.)

“What, maybe I’ll start calling you ‘Candypants’.” - “Woah, yeah!”

The kind of joke I like ...

… and you don’t even have to discriminate against anyone for it.

@sauro on Twitter:

OMG IT FINALLY HAPPENED! THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG

(via @_katsel)

"You could dump the boyfriend and get a cat. It wouldn’t do any chores, but least the cat would be cute and hang out with you sometimes."

Captain Awkward is awesome!

Looping videos from YouTube

(And from lots of other places, see youtube-dl --list-extractors for a list.)

# Download a video
$ youtube-dl https://www.youtube.com/watch?v=aZvDe3box70

# Loop the video
$ mpv --no-video --loop=inf --end 3:47 'Shaban - Ungleichung-aZvDe3box70.mkv'

Note the --loop=inf parameter. And --end <timestamp> is also very helpful.

Also, don’t forget that mpv can play any video that’s downloadable by youtube-dl directly! (But we can’t use that for looping, because then it would download the file repeatedly.)

git ls-files | xargs cat | entropy.rb | sort | tail -n20

One of our engineers came up with a useful script to grab all unique lines from the history of the repository and sort them according to entropy. This helps to lift any access keys or passwords which may have been committed at any point to the top.

That’s about what the commandline above does.

Here’s entropy.rb:

#!/usr/bin/env ruby

def shannon_entropy(s)
  d = {}
  s.each_char do |c|
    d[c] ||= 0.0
    d[c] += 1
  end

  res = 0.0
  d.each_value do |v|
    freq = v / s.length
    res -= freq * (Math.log(freq) / Math.log(2))
  end

  res
end

if __FILE__ == $0
  $stdin.each_line do |line|
    e = shannon_entropy(line)
    puts format("%.4f\t%s", e, line)
  end
end

The comment is from a Hacker News thread about a recent disclosure of (very few) private repositories on GitHub.

Another comment in the same thread points out that Shannon Entropy was used for that, which I then ported to Ruby.

And now, you can search for “interesting” lines in your repository. Have fun with what you find! :)
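If you don’t have Ruby at hand, the same measurement also fits in an awk one-liner (note that delete on a whole array is a GNU/mawk extension). A run of identical characters scores 0 bits per character, while a string of eight distinct characters scores exactly 3:

```shell
printf 'aaaaaaaa\nhQ9x2LzR\n' | awk '{
  delete cnt                      # character counts for this line
  n = length($0)
  for (i = 1; i <= n; i++) cnt[substr($0, i, 1)]++
  e = 0                           # shannon entropy in bits per character
  for (c in cnt) { f = cnt[c] / n; e -= f * log(f) / log(2) }
  printf "%.4f\t%s\n", e, $0
}'
```

High-entropy lines (random-looking tokens, keys) float to the top once you sort on that first column.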

(Today's) Adventures in Clojure

I’ve been writing a bit of Clojure again today; here are a few things I’ve learned.

tl;dr: Look for middlewares in ring-defaults. ring-middleware-format is neat. Use cider in Emacs, and try again (and again).

Learning an ecosystem

It’s still hard to get started. I know Clojure (the language) well enough, but learning the tooling is much more difficult for me. I’ve tried to write simple APIs before, but the problems were similar each time.

Documentation

It seems that documentation for Clojure libraries is hard to find on the web. ring and compojure both have generated docs, but they are simply listings of the namespaces and the symbols in them, without top-level examples.

So what I mostly did was a combination of reading examples, glancing at the source code of different libraries (ring-defaults, compojure-api, ring-middleware-format, …), and failing to get anywhere and trying again a few days later.

(I should have been using cider’s support for displaying documentation more, but that wouldn’t have helped with discovering which libraries to use.)

Middlewares?!

There’s a bewildering choice of ring middlewares to try. ring itself ships with a lot of them, but other useful ones come from other places.

However, finding them is mostly a matter of luck, I think. I started out with ring-defaults, but it doesn’t do content negotiation and does too many other things.

So now it’s just the following:

(def api
  (-> handlers
      (wrap-restful-format :formats [:edn :json :yaml-in-html])
      wrap-keyword-params
      wrap-params))

Where wrap-params and wrap-keyword-params come with ring itself, and wrap-restful-format does content negotiation.

wrap-restful-format uses the Accept and Content-Type headers to decide how to interpret requests and responses. In your code you simply set :body to some data, and wrap-restful-format will handle the conversion.
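For illustration, a handler under this setup just returns plain data; the route and payload here are made up, not from my actual code:

```clojure
(require '[compojure.core :refer [GET routes]])

(def handlers
  (routes
    ;; Return a plain Clojure map in :body; wrap-restful-format
    ;; serializes it as EDN, JSON, ... based on the Accept header.
    (GET "/status" []
      {:status 200
       :body {:service "example" :ok true}})))
```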

How did I learn of wrap-restful-format? I stumbled upon it while trying out compojure-api, which in turn I randomly found by searching for “compojure” on clojars.

I haven’t yet found a good way to catch errors. There are some middlewares for that, but I want one that does content-negotiation, and I don’t know if any support that.

Also, I haven’t yet found out how to selectively respond with HTML if requested, and otherwise API data. That would be very helpful for API endpoints that should also have a UI.

Neat things

This is mostly an aside, but both compojure-api and Nightlight are neat projects. With compojure-api you get automatic documentation for your API, and can even try it out there easily. Nightlight gives you an IDE in the browser. In theory that’s really cool, but it seems to be lacking documentation at this point.

Assorted Emacs tips

By default, the macros from compojure get indented very strangely, but put-clojure-indent can help. For example, to indent the GET macro properly, use the following:

(put-clojure-indent 'GET '(:defn))
;; ... and so on for POST etc.

Here (:defn) is an indent spec which allows properly indenting even complex macros.

Another thing that often tripped me up was how comments are indented. A single ; comment is indented off to the side; that is the default. When using ;; instead, the comment stays at the indentation level of the surrounding code.

Updating the dependencies in project.clj apparently does require restarting the leiningen processes. In my case, that means rerunning lein ring server-headless and restarting the cider connection in Emacs.

Additionally, if you’ve not used cider for a while you may still have its plugin in your ~/.lein/profiles.clj file. This is not necessary anymore.

find $HOME -maxdepth 3 -type f -atime -7 \( -name '*.txt' -or -name '*.md' \)

Find .txt and .md files that were accessed within the past week.

The -atime -7 controls the time here. It can also be used to find files accessed more than a week ago (-atime +7), or exactly a week ago (-atime 7).

The other interesting part is the parentheses, which group the two -name tests together so that -or applies only to those two. Without the parentheses, -or would split the whole expression in two: everything before it, or everything after it.
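To see the grouping in action, here is a quick experiment in a throwaway directory (the file names are made up for illustration):

```shell
# Scratch directory with three files
dir=$(mktemp -d)
touch "$dir/notes.txt" "$dir/readme.md" "$dir/image.png"

# The parentheses make -or cover only the two -name tests,
# so this matches notes.txt and readme.md but not image.png.
find "$dir" -type f \( -name '*.txt' -or -name '*.md' \)
```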

-maxdepth 3 is used so that the search space is small enough. With it enabled, the command completes almost instantaneously:

0.07s user 0.06s system 98% cpu 0.139 total

Whereas -maxdepth 4 is already much slower.

0.31s user 0.97s system 37% cpu 3.403 total

This works on my system because my notes are in relatively high-up directories.

Update:

Instead of the command above, I now use one that sorts the files by access time and returns all files it finds, not only the more recent ones:

ls -1t --time=atime $(find $HOME -maxdepth 2 -type f \( -name '*.txt' -or -name '*.md' \) \! -path "$HOME/.*")

This runs as a cronjob and its output is redirected into $HOME/.recent.txt, where it is then read by Emacs.
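One caveat: the command substitution word-splits, so filenames containing spaces break the listing. With GNU find the same sort can be done without ls at all; this is a sketch, where -printf '%A@' emits the access time as seconds since the epoch:

```shell
# Newest-accessed first: prefix each path with its atime,
# sort numerically descending, then strip the prefix again.
find "$HOME" -maxdepth 2 -type f \( -name '*.txt' -or -name '*.md' \) \! -path "$HOME/.*" \
  -printf '%A@\t%p\n' | sort -rn | cut -f2-
```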

Introduction

blog is a tiny tool that generates your (link)blog. It takes a YAML file as input, and produces a single HTML file on stdout, which you could then upload to your server, place on your Desktop, or pass along to friends.

blog is not meant to be a feature-rich program. It does the bare minimum necessary to host a blog with different post types, and not more. Whichever additional features you need you can add to your version of it.

How to use it

All posts are written in a single file blog.yaml, which contains a list of entries.

The most basic post type is text, written as follows:

- title: An example post
  content: You can use *Markdown* here...

Optionally you can specify a date field.
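For example (the exact date format is an assumption on my part; adjust it to whatever your blog.yaml already uses):

```yaml
- title: A dated post
  date: 2017-05-01
  content: This post carries an explicit date.
```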

If content starts with a non-alphabetic character, you need to start the value with a vertical bar |:

- title: Special characters...
  content: |
    *This* post starts with a special character.

There are a few other types of posts:

With the exception of the shell type, title and content are optional.

pacman -Qo $(ls -1t --time=atime /usr/bin | tail -n30)

Find infrequently used binaries on your system. ls --time=atime is the key here: it makes -t sort by access time instead of modification time, which is the default.
