GNOME Shell extensions can get disabled at any time for various reasons, so it's essential to clean up the entire extension state when an extension gets disabled. GNOME Shell doesn't provide much infrastructure for this purpose, though, so let's roll our own pattern for properly destroying a GNOME Shell extension in Typescript.
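As a rough sketch of where this is going (not the full pattern from the post): assuming GNOME 45 or later and Typescript type definitions for the shell's resource:/// modules, a hypothetical Destructible interface and Destroyer class could collect everything the extension creates and tear it all down again in disable():

```typescript
// Sketch of a cleanup pattern: collect everything created in enable()
// and destroy it all in disable(). Destructible and Destroyer are
// illustrative names, not GNOME Shell API.
import { Extension } from 'resource:///org/gnome/shell/extensions/extension.js';

/** Anything that knows how to clean itself up. */
interface Destructible {
    destroy(): void;
}

/** Tracks destructible objects and destroys them all at once. */
class Destroyer implements Destructible {
    private destructibles: Destructible[] = [];

    /** Register an object for later destruction and return it. */
    add<T extends Destructible>(destructible: T): T {
        this.destructibles.push(destructible);
        return destructible;
    }

    /** Destroy all registered objects in reverse order of registration. */
    destroy(): void {
        for (const destructible of this.destructibles.reverse()) {
            try {
                destructible.destroy();
            } catch (error) {
                console.error('Failed to destroy object', error);
            }
        }
        this.destructibles = [];
    }
}

export default class MyExtension extends Extension {
    private destroyer: Destroyer | null = null;

    enable(): void {
        this.destroyer = new Destroyer();
        // Register indicators, signal connections, sources, etc. here, e.g.
        // this.destroyer.add(someIndicator);
    }

    disable(): void {
        // Tear down the entire extension state when the shell disables us.
        this.destroyer?.destroy();
        this.destroyer = null;
    }
}
```

Wrapping signal connections and GLib sources in small Destructible adapters then keeps disable() down to a single destroy() call.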
We have a few Java libraries and SOAP clients around, based on JAXB, JAX-WS, and other such relics. Sometimes we don’t touch these for years, and yet they still run on contemporary JDKs with only moderate difficulties.
We have a .NET/WinForms-based desktop application from about 15 years ago, which still runs on contemporary .NET 4 with only marginal changes.
Our Angular app, which is but three years old, has already gone through two major deprecations and migrations, each of which took us two months and required touching every single part of the code base.
The older I get the more I learn to appreciate Java.
The ecosystem for GNOME Shell has come a long way in the last few years. We now have a comprehensive guide for extension developers and good API docs for the underlying native libraries. The API documentation in GNOME Shell itself is still lacking, but meanwhile its Javascript source code is a surprisingly good and readable reference.
With GNOME 45 the shell took another big step: it now uses ES modules instead of the legacy import syntax of GJS. While this causes major breakage for all extensions, requiring every single extension to be ported to ES modules, it finally enables mostly seamless integration with standard Javascript tooling, which is increasingly built around ES modules these days.
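To illustrate the change with a small example (the modules shown are just common shell imports, not a complete extension), the legacy GJS syntax and its ES module replacement look roughly like this:

```typescript
// Legacy GJS imports (GNOME 44 and earlier):
//   const { St, Clutter } = imports.gi;
//   const Main = imports.ui.main;
//   const ExtensionUtils = imports.misc.extensionUtils;

// Standard ES modules (GNOME 45 and later):
import St from 'gi://St';
import Clutter from 'gi://Clutter';
import * as Main from 'resource:///org/gnome/shell/ui/main.js';
import { Extension } from 'resource:///org/gnome/shell/extensions/extension.js';
```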
Together with another recent tool this means we finally have Typescript for shell extensions!
You can run `systemctl service-log-level systemd-resolved.service debug` to get a debug log of systemd-resolved trying to resolve a specific domain.
This is backed by D-Bus: if a service listens on the bus and declares its bus name in its unit file, it can expose the log control interface on its bus connection and let systemctl change its log level and log target.
All of systemd's own services support this interface, but unfortunately it hasn't seen widespread adoption outside systemd yet, which is a pity, because it's really a great feature for debugging.
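To make the D-Bus side a bit more tangible, here is a rough GJS/Typescript sketch of what systemctl does under the hood: it writes the LogLevel property of the org.freedesktop.LogControl1 interface at the well-known object path /org/freedesktop/LogControl1 on the service's bus name. systemd-resolved and its bus name org.freedesktop.resolve1 serve as the example; error handling is omitted.

```typescript
// Sketch: set systemd-resolved's log level to "debug" via the
// org.freedesktop.LogControl1 interface, i.e. roughly what
// "systemctl service-log-level systemd-resolved debug" does.
// Note: writing this property typically requires privileges, just like systemctl.
import Gio from 'gi://Gio';
import GLib from 'gi://GLib';

const connection = Gio.bus_get_sync(Gio.BusType.SYSTEM, null);

// Write the LogLevel property through org.freedesktop.DBus.Properties.Set.
connection.call_sync(
    'org.freedesktop.resolve1',          // well-known bus name of systemd-resolved
    '/org/freedesktop/LogControl1',      // fixed object path defined by the interface
    'org.freedesktop.DBus.Properties',
    'Set',
    new GLib.Variant('(ssv)', [
        'org.freedesktop.LogControl1',
        'LogLevel',
        new GLib.Variant('s', 'debug'),
    ]),
    null,                                // expected reply type: accept whatever comes back
    Gio.DBusCallFlags.NONE,
    -1,                                  // default timeout
    null,                                // no cancellable
);
```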
I certainly plan to use it more, so I put up logcontrol.rs on crates.io.
I'm a software engineer and a system engineer for satellite mission planning. I do Scala, Typescript, Python, and Rust, all of it in GNOME on Arch Linux. I also do bouldering, cycling, and the occasional video game. And I read a lot.
Update: I no longer use dracut, and the corresponding part of this blog post no longer reflects my setup.
This article describes my Arch Linux setup, which combines Secure Boot with custom keys, TPM2-based full disk encryption, and systemd-homed into a fully encrypted and authenticated, yet convenient, Linux system.
Historically, cryptsetup and LUKS only supported good old passphrases; recent systemd versions, however, extend cryptsetup with additional key types such as FIDO2 tokens and TPM2 devices.
I like the idea of encrypting the rootfs with a TPM2 key: it allows booting without ugly LUKS password prompts while still keeping data encrypted at rest, and, combined with Secure Boot, it also protects the running system against unauthorized access.
Secure Boot prevents others from placing custom kernels on the unencrypted EFI system partition and booting them, or from changing the kernel cmdline, in order to obtain root access to the unlocked rootfs. LUKS encryption with a TPM2-based key bound to the Secure Boot state protects the data if someone removes the hard disk and attempts to access it offline, or tries to disable Secure Boot in order to boot a custom kernel.
A node hosts a GitLab runner and a small k3s cluster, which runs a few services as regular Kubernetes deployments. A CI job pinned to that runner builds Docker images for these services, updates the images of the corresponding deployments, and starts a few system and acceptance tests. The CI job does not push those images to the in-house registry; to avoid polluting the registry with hundreds of images it just builds them locally.
Each test then scales each deployment to zero replicas to effectively stop all services, clears the system’s underlying database, and scales the service deployments back to a small number of replicas sufficient for testing.
The whole thing runs fine until one day the replicas randomly fail to start.