Posts tagged "computing":

07 Jan 2020

Switching website to GitLab Pages

Previously I detailed how I set up blog.winny.tech using GitHub for source code hosting and Caddy’s git plugin for deployment. This works well and I used a similar setup with my homepage. The downside is I host the static web content and I am tied to using Caddy.1 I imagine simpler is better, so I opted to host my static sites — https://winny.tech/ & https://blog.winny.tech/ — with GitLab pages.

What’s wrong with Caddy?

Caddy is very easy to get started with, but it has its own set of trade-offs. Over the last few years, I’ve noticed multiple hard-to-isolate performance quirks, some of which were likely related to the official Docker image. In particular, I had built a Docker image of Caddy with webdav support, and the overall performance tanked to seconds per request, even with webdav disabled. I still have no clue what happened there; instrumenting Caddy through Docker appeared nontrivial, so I gave up on webdav support, reverted to my old Docker based setup, and everything was fast, once again.

There is a good amount of inflexibility in Caddy, such as the git plugin being limited to deploying into a non-root folder of the web root. And its rewrite logic does what you usually want, but it is not nearly as flexible as nginx’s.

Asking questions on their IRC is usually met with no response of any kind, which indicates to me that the project’s community isn’t very active.

The move to Caddy v2 is unwelcome; I don’t want to relearn yet another set of config files and quirks, especially weeding through the layer of configuration file format adapters and the abstracted-away configuration options, so I’d rather just use Certbot and some other HTTPD that won’t change everything for the fun of it.2

Until recently, Caddy experimented with a pretty dubious monetization strategy. HackerNoon published an article detailing how it worked. In short: they plastered text all over their website claiming you “need to buy a license” to use Caddy commercially, though that claim was never true; Caddy was always covered by the Apache License 2.0. Instead, you needed a commercial license only in the narrow use-case that your organization wanted to use Caddy’s prebuilt release binaries as offered on their website. It is good they stopped this scheme, but it leaves a bad taste with the community, and with me, and discourages me from relying on the project moving forward.

Why GitLab Pages instead of GitHub Pages?

I have used both GitHub Pages and GitLab Pages in the past. My experience with GitHub Pages is that it’s relatively inflexible: it is difficult to see what is going to be published, and its CI/CD setup is only useful for certain Jekyll-based sites. GitLab Pages, on the other hand, lets you set up any old Docker-based CI/CD workflow, so it is possible to render a blog with GitLab CI using any static site generator. The IEEE-CS student chapter I am a part of does just this: we use a combination of static redirect sites and a Pelican-powered static website. There are a large number of example repositories for most of the popular ways to publish a static website, including Gatsby, Hugo, and Sphinx. Needless to say, GitLab Pages puts GitHub Pages to shame in terms of flexibility.

Setting up GitLab Pages

There are two steps to setting up GitLab Pages. These are the most important ideas; how to navigate the site is something the reader must experience for themselves. Nothing beats experimentation and reading the docs. Make sure to refer to the official GitLab Pages documentation for further details.

1) Getting GitLab Pages to deploy your git repository

Before getting started, make sure GitLab Pages is activated for your project; check via Settings → Pages on your project. Most of the Pages settings are rooted in that webpage.

How GitLab Pages CI/CD deploys your site depends on your site’s software, or lack thereof. If you are simply publishing a pre-rendered static website on GitLab Pages, a simple .gitlab-ci.yml will work for you:

pages:
  stage: deploy
  script:
  # Stage into a hidden directory first so `cp *' cannot recurse into its own output.
  - mkdir .public
  - cp -rv -- * .public/  # Note the `--'
  - mv .public public
  artifacts:
    paths:
    - public
  only:
  - master

This simply tells GitLab CI/CD to copy everything not starting with a . into the public folder. By the way, one cannot change the public folder path. It does not appear possible to use something like artifacts: paths: ["."] to deploy the entire git repository.
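If your site is generated rather than committed pre-rendered, the same shape works: swap the copy step for the generator invocation. Here is a minimal sketch for a Pelican site like my IEEE-CS chapter uses (untested; the image tag and package list are assumptions, so adapt them to your generator):

image: python:3.8

pages:
  stage: deploy
  script:
  # Install the generator, then render straight into the mandatory public/ folder.
  - pip install pelican markdown
  - pelican content --output public
  artifacts:
    paths:
    - public
  only:
  - master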

There is a GitLab CI/CD YAML lint website3 (and a web API). Additionally, there is reference documentation for the .gitlab-ci.yml schema. Please note, the linter often yields confusing error messages. For example, it is invalid to omit a script key, but the error message is Error: root config contains unknown keys: pages. Take the error messages with a grain of salt.
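The web API is handy for checking a .gitlab-ci.yml before pushing. A sketch using the lint endpoint as documented at the time of writing (it expects the YAML wrapped in a JSON object; jq does the escaping):

# POST the file to GitLab's CI lint endpoint as JSON-wrapped YAML.
jq --null-input --arg yaml "$(cat .gitlab-ci.yml)" '{content: $yaml}' \
  | curl --silent --header 'Content-Type: application/json' \
         --data @- https://gitlab.com/api/v4/ci/lint

The response is JSON with a status field of valid or invalid, plus any error messages.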

Once you have what seems like the .gitlab-ci.yml you want, commit it to your git repository and push to GitLab. Check progress under CI/CD → Pipelines. If everything works out, you should be able to view the website on GitLab Pages’ own domain, e.g. https://winny.tech.gitlab.io/blog.winny.tech. The format of the above URL (visible in Settings → Pages) is https://<namespace>.gitlab.io/<project>. If you can’t view your website, check the CI/CD pipeline’s logs and inspect the artifacts ZIP, which is also available from the CI/CD pipelines page. Chances are you need to edit the .gitlab-ci.yml or tweak the scripts used in the YAML file.

2) Hosting the GitLab Pages site on your (sub-)domain

All the tasks in this section happen under Settings → Pages, using the “New Domain” or “Edit” webpages.

To set up GitLab Pages on your domain, you need to first prove ownership of that specific domain via a specially constructed TXT record, then configure the domain to point to GitLab Pages via a CNAME or A record. In general I recommend an A record, because you can then stuff any other records you please on the same domain.

Simply add an A record to your DNS setup like so: yourdomain.com. A 35.185.44.232.4 After the DNS update, propagation can take anywhere from seconds to the rest of your record’s TTL (Time-To-Live). Visiting your domain should then serve a GitLab Pages placeholder page with a 4xx error code.

Next, prove to GitLab that you own the domain. Create the TXT record as indicated in the GitLab Pages management website. The string to the left of TXT should be the name/subdomain, and the string to the right of TXT is the value. Alternatively, you can put the entire string into the value field of a TXT record (?!).

Note, the above two sub-steps are independent; one can validate the domain before adding the record to point it to GitLab, and vice versa.
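Either way, dig is a quick way to confirm both records are live before waiting on GitLab. Using this blog’s records (shown later in this post) as the example:

$ dig +short A blog.winny.tech
35.185.44.232
$ dig +short TXT _gitlab-pages-verification-code.blog.winny.tech
"gitlab-pages-verification-code=99da5843ab3eabe1288b3f8b3c3d8872"

If the A query returns nothing or a stale address, wait out the TTL before suspecting GitLab.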

GitLab Pages Gotchas

There are a few gotchas about GitLab Pages. Some of them are related to GitLab Pages users not being familiar with all of the DNS RFCs. Others are simply because GitLab Pages has quirks too.

CNAME on apex domain is a no-no

Make sure you do not use a CNAME record on the apex domain; use an A record instead. Paraphrasing from the ServerFault answer: RFC 2181 clarifies that a CNAME record can only coexist with records of types SIG, NXT, and KEY. Every apex domain must contain NS and SOA records, hence a CNAME on the apex domain will break things.

CNAME and TXT cannot co-exist

The above is also true for TXT and CNAME on the same subdomain. For example, if one adds TXT somevaluehere and CNAME example.com to the same domain, say hello.example.com, things will not behave correctly.

If we have a look at the GitLab Pages admin page, the language is mildly confusing, stating “To verify ownership of your domain, add the above key to a TXT record within your DNS configuration.” At first, I thought “within your DNS configuration” meant “place this entire string as the right hand side of a TXT record on any subdomain in your configuration”. This does work; as such I had

blog.winny.tech. IN A   35.185.44.232
blog.winny.tech. IN TXT "_gitlab-pages-verification-code.blog.winny.tech TXT gitlab-pages-verification-code=99da5843ab3eabe1288b3f8b3c3d8872"

But they probably didn’t mean that. Surely I should have this instead:

blog.winny.tech IN A   35.185.44.232
_gitlab-pages-verification-code.blog.winny.tech IN TXT gitlab-pages-verification-code=99da5843ab3eabe1288b3f8b3c3d8872

I feel a bit silly after realizing this is what the GitLab Pages folks intended, but it really was not clear to me, especially given that clicking in the TXT record’s text-box highlights the entire string, instead of letting the user copy the important bits (such as the TXT record’s name) into whatever web management UI they might be using for DNS.

The feedback loop for activation of the domain is slow

It can take a while for a domain to be activated by GitLab Pages after the initial deploy. Things to look for: you should get a GitLab Pages error page on your domain if you set up the CNAME or A record correctly. The error is usually “Unauthorized (401)”, but it can be other errors.

The other thing to check is that your domain is in the “Verified” state on the GitLab Pages admin website.

The feedback loop for activation of LetsEncrypt HTTPS is huge

Sometimes GitLab Pages will seemingly never activate LetsEncrypt HTTPS support. If this happens, a discussion suggests the best solution is to remove the domain from your GitLab Pages setup and add it again. You will likely have to edit the TXT record used to claim domain ownership. This also worked for me when I experienced the same issue.

Make sure to enable GitLab Pages for all users

Conclusion

GitLab Pages isn’t perfect, but this setup should streamline what services my VPS hosts and give me more freedom to fiddle with my VPS configuration and deployment. I look forward to rebuilding my VPS with cdist, ansible, or saltstack; while that happens, my website will stay up thanks to GitLab Pages. Also, I imagine GitLab Pages is a bit more resilient to downtime than a budget VPS provider.

The repositories with .gitlab-ci.yml files for both this site and winny.tech are public on GitLab’s official hosting. Presently it is the simplest setup possible, simply deploying pre-generated content already checked into git, but the possibilities are endless.

Footnotes:

1

I could deploy my own webhook application server that GitHub/GitLab connects to, and have done so in the past, but every application I manage is another thing I have to, well, ahem, manage (and fix bugs for).

2

There are some cool new features in Caddy 2, such as the ability to configure Caddy via a RESTful API and a sub-command driven CLI, but I don’t need additional features.

3

From the GitLab CI Linter’s old page “go to ‘CI/CD → Pipelines’ inside your project, and click on the ‘CI Lint’ button”. Or simply visit https://gitlab.com/username/project/-/ci/lint.

4

It’s a good idea to compare the mentioned IP address against what appears in the GitLab Pages Custom Domain management interface.

Tags: computing operations
25 Dec 2019

How to fix early framebuffer problems, or "Can I type my disk password yet??"

Most of my workstations & laptops require a passphrase typed in to open the encrypted root filesystem. So my steps to booting are as follows:

  1. Power on machine
  2. Wait for FDE passphrase prompt
  3. Type in FDE passphrase
  4. Wait for boot to complete and automatic XFCE session to start

Since I need to know when the computer is ready to accept the passphrase, it is important that the framebuffer is usable during the early part of the boot. In the case of my HP EliteBook 820 G4, the EFI framebuffer does not appear to work, and I’d rather not boot in BIOS mode just to get a functional VESA framebuffer. Making things more awkward, firmware is needed when the i915 driver is loaded, or the framebuffer will not work either. (It’s not always clear whether firmware is needed, so one should run dmesg | grep -F firmware and check if firmware is being loaded.)

With this information, the problem is summarized to: “How do I ensure i915 is available at boot with the appropriate firmware?”. This question can be easily generalized to any framebuffer driver, as the steps are more-or-less the same.

Zeroth step: Do you need only a driver, or a driver with firmware?

It is a good idea to verify whether your kernel is missing a driver at boot, missing firmware, or both. Boot up a Live USB with good hardware compatibility, such as GRML1 or Ubuntu’s, and let’s see what framebuffer driver our host is trying to use2:

$ dmesg | grep -i 'frame.*buffer'
[    4.790570] efifb: framebuffer at 0xe0000000, using 8128k, total 8128k
[    4.790611] fb0: EFI VGA frame buffer device
[    4.820637] Console: switching to colour frame buffer device 240x67
[    6.643895] i915 0000:00:02.0: fb1: i915drmfb frame buffer device

So we can see efifb is initially used for a couple seconds, then i915 is used for the rest of the computer’s uptime. Now let’s check whether firmware is necessary, first asking modinfo(8) if it knows of any firmware:

$ modinfo i915 -F firmware
i915/bxt_dmc_ver1_07.bin
i915/skl_dmc_ver1_27.bin
i915/kbl_dmc_ver1_04.bin
... SNIP ...
i915/kbl_guc_33.0.0.bin
i915/icl_huc_ver8_4_3238.bin
i915/icl_guc_33.0.0.bin

This indicates the driver will load firmware when available, and when necessary for the particular mode of operation or hardware.

Now let’s look at dmesg to see if any firmware is loaded:

[    0.222906] Spectre V2 : Enabling Restricted Speculation for firmware calls
[    5.511731] [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
[   25.579703] iwlwifi 0000:02:00.0: loaded firmware version 36.77d01142.0 op_mode iwlmvm
[   25.612759] Bluetooth: hci0: Minimum firmware build 1 week 10 2014
[   25.620251] Bluetooth: hci0: Found device firmware: intel/ibt-12-16.sfi
[   25.712793] iwlwifi 0000:02:00.0: Allocated 0x00400000 bytes for firmware monitor.
[   27.042080] Bluetooth: hci0: Waiting for firmware download to complete

Aha! So it appears we need i915/kbl_dmc_ver1_04.bin for i915. In the case one doesn’t need firmware, dmesg won’t show anything related to drm or any line with the driver’s name in it.

By the way, it is a good idea to check dmesg for hints about missing firmware or alternative drivers. For example, my trackpad is supported by both i2c and synaptics based trackpad drivers, and the kernel was kind enough to tell me.

First step: Obtain the firmware

On Gentoo, install sys-kernel/linux-firmware. You will have to agree to some non-free licenses; nothing too inane, but worth mentioning. Just run emerge -av sys-kernel/linux-firmware. (On other distros it might be this easy or more difficult; in my experience, Debian does not ship every single firmware like Gentoo does, so YMMV.)

Second step, Option A: Compile firmware into your kernel

Since most of my systems run Gentoo, it is business as usual to deploy a kernel with most excess drivers disabled, except for common hot-swappable components such as USB network interfaces, audio devices, and so on. For example, this laptop’s config was originally derived from genkernel’s stock amd64 config with most extra drivers disabled, then augmented with support for an Acer ES1-111M-C7DE, and finally with support for this EliteBook.

I had compiled the kernel with i915 support built into the image, as opposed to as an additional kernel module. Unfortunately this meant the kernel was unable to load the firmware from the filesystem, because it appears only kernel modules can do that. To work around this without resorting to making i915 a kernel module, we can include the firmware within the kernel image (vmlinuz). Including both firmware and driver in the vmlinuz has a couple benefits. First, they will always be available: there is no need to figure out how to load the driver and firmware from the initrd, let alone getting whatever initrd generator one is using to cooperate. A downside is it makes the kernel very specific to the machine, because a different Intel machine may need a different firmware file compiled in.

To include the firmware in the kernel, I set the following values in my kernel config (.config in your kernel source tree):

CONFIG_EXTRA_FIRMWARE="i915/kbl_dmc_ver1_04.bin"
CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"

Note, if you’re using menuconfig, you can type /EXTRA_FIRMWARE (slash for search, then the text) followed by keyboard return to find where these settings exist in the menu system.

Then I verified i915 is indeed not a kernel module, but built into the kernel image (it would be m if it’s a module):

CONFIG_DRM_I915=y

After compiling & installing the kernel (and generating a dracut initrd for cryptsetup/lvm), I was able to reboot and get an early pre-mounted-root framebuffer on this device.

Second step, Option B: A portable kernel approach (using sys-kernel/vanilla-kernel)

I discovered the Gentoo devs have begun shipping an ebuild that builds and installs a kernel with a portable, livecd-friendly config. In addition, this package will optionally generate an initrd with dracut as a pkg_postinst step, making it very suitable for users who just want a working kernel and don’t mind excessive compatibility (at a cost to size and build time).

This presents a different challenge, because while this package does allow the user to drop in their own .config, it is not very multiple-machine-deployment friendly to hard-code each individual firmware into the kernel. Instead we tell dracut to include our framebuffer driver. As mentioned above I found this computer uses the i915 kernel driver for framebuffer. Let’s tell dracut to include the driver:

cat > /etc/dracut.conf.d/i915.conf <<EOF
add_drivers+=" i915 "
EOF

Dracut is smart enough to pick up the firmware the kernel module needs, provided they are installed. To get an idea what firmware dracut will include, run modinfo i915 -F firmware which will print out a bunch of firmware relative paths.

After applying this fix, just regenerate your initrd using dracut; in my case I let portage do the work: emerge -1av sys-kernel/vanilla-kernel. Finally reboot.
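If you manage the initrd by hand instead of through portage, regenerating and spot-checking it looks roughly like this (a sketch; the initramfs path below is an assumption and varies by distro):

# Rebuild the initramfs for the running kernel, then confirm the
# i915 module and its firmware actually made it in.
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)
lsinitrd /boot/initramfs-$(uname -r).img | grep -iE 'i915|firmware'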

Conclusion

Check dmesg. Always check dmesg. We found two ways to deploy firmware: in-kernel and in-initrd. The in-kernel technique is best for a device-specific kernel; the in-initrd technique is best for a portable kernel. I am a big fan of the second technique because it scales well to many machines.

I did not touch on the political side of using binary blobs. It would be nice to not use any non-free software, but I’d rather have a working system with a couple small non-free components than a non-working system. Which is more valuable: your freedom, or the reduced capacity of your tools?

Footnotes:

1

GRML is my favorite live media. It is simple, to the point, and has lots of little scripts to streamline tasks such as setting up a wireless AP, an iPXE netboot environment, or a router, installing Debian, and so on. Remastering is relatively straightforward. It also has a sane GUI suitable for any machine (Fluxbox).

2

Thanks to this post on Ask Ubuntu

Tags: gentoo linux computing
02 Aug 2019

The Danger of fuzzy matching over one's PATH

A while back I noticed my personal mnt/ directory, my (empty) personal tmp/ directory, and a few symbolic links had disappeared from my home directory. I only noticed because I use unison1 to synchronize my desktop and laptop homedirs. The actual number of removed directories and symbolic links was staggering, and it cost me five minutes of extra effort searching through the unison UI to ignore files I didn’t want to synchronize. Repeat this a few times a day, with the problem occurring at seemingly random intervals, and you’ve wasted minutes out of every day, which adds up to hours every month.

For months I had not figured out what the problem was. By chance, while using my application launcher, I noticed I had accidentally run not links -g 2 but cleanlinks. I wondered to myself what I had been running by accident, as I had done this before but had not thought anything of it, assuming it was a program that would print usage or perform a no-op by default.

I was wrong.

Turns out cleanlinks searches the current working directory for empty directories and broken symbolic links, and removes them. Both are useful to me. For example, I keep empty directories in ~/mnt/ to mount sshfs stuff, and I prefer to use ~/tmp/ as a work directory because no system scripts will touch it.3 I also had a few broken symbolic links scattered about, from weird git repository working trees to some stale user-level systemd unit links from my Arch Linux install.

Making things more interesting, if you run cleanlinks --help, or with any flags at all, it operates as usual. So it’s also a mistake to do cleanlinks /some/directory/i/want/to/clean. As part of imake,4 the old X11 ecosystem build tools, cleanlinks is installed on many systems, and it’s not safe to run lest you enjoy random stuff being messed about with in your current directory.

How did I manage to run cleanlinks so many times? I did not have links installed on the affected machine. And even after I did install it, I forgot to remove cleanlinks from my rofi run cache, so in certain cases it matched with higher precedence than links. Hence I ran it a few more times by accident even after installing links.
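One way to recover, short of uninstalling imake, is to delete the stale entry from rofi’s run history so it stops outranking links. On my machines that history lives in rofi’s cache file; the path below is an assumption, so check where your rofi version keeps its run cache:

# Hypothetical cache location; rofi's run history file name varies by version.
sed -i '/cleanlinks/d' ~/.cache/rofi-3.runcache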

Therefore, I strongly recommend one doesn’t fuzzy match over their PATH. Who knows what other nasty tools ship on your system that will lay waste to your productivity or, worse, damage your personal files.

Regardless, I have yet to heed my own warning. Maybe I should just use .desktop files, but then again, maybe there exists a cleanlinks.desktop… Ideally, I’ll create a directory of symlinks to programs I want to launch from rofi. Someday :)
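The symlink-farm idea is simple enough to sketch now: fill a directory with links to only the programs worth launching, then point rofi’s run dialog at it by overriding PATH (rofi matches executables found in PATH). ~/.local/launcher is an assumed location:

# Build a curated launcher directory and restrict rofi to it.
mkdir -p ~/.local/launcher
ln -sf "$(command -v links)" ~/.local/launcher/
PATH=~/.local/launcher rofi -show run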

About Unison

I should mention unison is a superb tool for synchronizing your data. It shows the user a list of changes to each directory being synchronized, then waits for the user to decide which way each file should be synchronized:

  1. Send file from host A to B
  2. Send file from host B to A
  3. Ignore the file this time
  4. Ignore the file permanently
  5. Merge the files

Because unison doesn’t try to be fancy or automatic, it is easy to understand what is happening.

Footnotes:

2

Links 2 is the best web 1.0 browser. It even shows images and different text sizes. Screenshots on this page.

3

/var/tmp/ could also work, but this way I know nobody is gonna mess with my files and I won’t accidentally mess up permissions on sensitive data.

4

imake on freedesktop’s GitLab. See also what packages depend on imake in Arch Linux. I use Gentoo across my laptop and workstation, so it’s necessary to have imake installed.

Tags: computing rant
28 Jul 2019

Open URL in existing Qutebrowser from Emacs Daemon on Gentoo

On my Gentoo desktops, I use Emacs Daemon via sys-emacs/emacs-daemon1 to ensure an Emacs instance is ready to go and always available from boot. This is done by creating a symbolic link like /etc/init.d/emacs.winston pointing to /etc/init.d/emacs, which will start Emacs for the given user. See the package README for more details.
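For reference, the whole dance is only a few OpenRC commands. A sketch, with winston standing in for your username:

# Create a per-user Emacs daemon service and start it at boot (OpenRC).
ln -s /etc/init.d/emacs /etc/init.d/emacs.winston
rc-update add emacs.winston default
rc-service emacs.winston start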

A shortcoming of this setup is that XDG_RUNTIME_DIR2 is not set, as that variable is set by my desktop session (maybe LightDM or ConsoleKit sets it?). As a result, when I open a URL from Emacs Daemon, it opens a fresh qutebrowser session, loading the saved default session and making a mess of my workflow.

One approach to fix this might be to instead run Emacs daemon from my .xsession script, but I'd rather not supervise daemons at the user level; if I were to consider this, I'd be better off switching to systemd for user-level services anyway.

The solution I came up with is to add some lines to my init.el to ensure XDG_RUNTIME_DIR is set to the expected value:

(require 'subr-x)  ; provides string-empty-p

(defun winny/ensure-XDG_RUNTIME_DIR ()
  "Ensure XDG_RUNTIME_DIR is set. Used by qutebrowser and other utilities."
  (let ((rd (getenv "XDG_RUNTIME_DIR")))
    (when (or (not rd) (string-empty-p rd))
      (setenv "XDG_RUNTIME_DIR" (format "/run/user/%d" (user-uid))))))

(add-hook 'after-init-hook #'winny/ensure-XDG_RUNTIME_DIR)

A strange emacs-ism: (user-uid) returns a float or an integer, despite the backing uid_t (on *nix) being guaranteed to be an integer type. I'll just assume this will never return a float. Please contact me otherwise; I'd love to hear about it.
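If you do want to guard against that hypothetical float, coercing to an integer is one standard function away; a minimal variant of the setenv line:

;; truncate coerces a float (or integer) to an integer, so %d is always safe.
(setenv "XDG_RUNTIME_DIR" (format "/run/user/%d" (truncate (user-uid))))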

Footnotes:

Tags: emacs productivity computing gentoo qutebrowser
22 Jan 2019

Blink Shell: First Thoughts

As a heavy user of SSH to manage computers and IRC via command line clients, the most used application on my phone besides the web browser is an SSH client. Previously I used Prompt and it worked, but barely. My issues with Prompt include crashing on the emoji spam that is common in certain IRC channels, very slow terminal rendering (watching the output of compiling a large package would cause Prompt to lag uncontrollably for tens of seconds), and a relatively unintuitive UI. Please note I am referring to Prompt, not Prompt 21, which is another problem in itself (the idea that I need to pay for the same product, rewritten to actually work, is just ludicrous).

blink-thumb.png

Another thing to note before I dig into Blink is that I'm not a big fan of mobile devices. They get the job done, but in most cases I'd rather crack open my netbook and not fight with the learn-by-fiddling mobile UI paradigms most devices have adopted. I am also not a fan of Apple's ecosystem, but I'm pretty content with my iPhone and iPad for my basic use cases: browsing the web, chat applications, making phone calls, casual mobile gaming, and logging into other computers via SSH.

Enter Blink2. This application does a few things differently from other mobile SSH clients. It offers a command-line-first user interface. At first I thought this would be painful, but the application's verbs are pretty simple and match up with how one uses ssh and mosh in a terminal emulator. The terminal emulator is responsive, significantly more stable than Prompt, and looks good. I can even load up my favorite font, Droid Sans Mono3 (thanks to Blink's support for loading webfonts via a URL to a CSS file).

Another thing Blink does differently is offer Mosh support4. Mosh is an alternative to SSH that can "roam" across networks and device power state changes. One thing to note is Mosh requires SSH to make the initial login, then switches to the Mosh protocol. When using mosh, I tend to log in once throughout the day. I can then switch networks and mosh will gracefully reconnect as soon as the server is reachable again. In addition, when I sleep my netbook, the session isn't lost (but don't go using this instead of tmux or screen); when I wake it, mosh gracefully reconnects to the server. The same is true of Blink's Mosh support. It even appears to save state between iOS application state changes, short of forcing the application to quit (or rebooting your device). This means I only log in once on my phone throughout the day, and the connection is gracefully dropped when the app becomes inactive and reconnects when the app is opened again.

I did crash Blink once after writing non-ASCII binary data to the terminal followed by some other operations, but I have not managed to reproduce the crash yet. If I do, I am confident I can report the bug and get meaningful feedback, as Blink has a bug tracker on GitHub5. The code is available under GPLv3 (but not to worry: the single copyright holder can safely relicense it to deploy on the iOS App Store6). I could probably build it myself and install that, but I neither own a Mac (nor wish to install Mac OS X in a VM), nor do I ever want to open Xcode again even if I had a way to. That life isn't for me. Again, I'm very practical about how I use my iOS devices and don't enjoy using them more than I have to, so I'm willing to pay $20 for an application I'll use every day of my life on this walled-garden platform. An added silver lining is this investment goes towards the development of a great app that appears to have active maintainership.

Note: I did not explore some of Blink's features, such as using it as a local iOS shell or settings synchronization. YMMV.

Getting started

Here are the steps I used to get started with Blink, as it was relatively unclear where to find documentation and which settings are necessary to configure. Usage information can be found via the help command, the README on GitHub7, and fiddling with the UI.

  1. Run the config command.
    1. Default User -> set to your preferred username on most of your servers
    2. Add a new font via: Appearance -> Add a new font. Open the gallery and grab the raw CSS file (you can get a raw URL after switching to GitHub desktop), then paste it into the URL Address Field. I prefer Droid Sans Mono for its readability at all sizes.
    3. Keys -> + -> Create New

      Please use ssh keypair authentication. Password authentication requires you to memorize a password to log in, which can be bruteforced or otherwise leaked when typed in the wrong context. Also, be sure to use a unique key on each of your devices to make revocation easier when a device is no longer used.

      Note: it appears keys are always stored as plaintext, which may be acceptable for your uses, but it appears ssh-keygen can create a key with a passphrase. I wasn't able to get Blink to work with passphrase-protected keys, but I didn't try very hard. YMMV.

      • Type: RSA
      • Bits: 4096 (why not?)
      • Name: id_rsa
    4. Hosts -> +
      • Host: the name you gave your host when running ssh host or mosh host. This is not related to the server's hostname, though it can be the same. I prefer simple names, usually the hostname before the first dot (e.g. worf.winny.tech becomes simply worf)
      • User: make sure this is correct
      • Key: select a key to use.
  2. Now you need to install the SSH public key:
    1. Run config again
    2. Keys -> id_rsa -> Copy Public Key
    3. Install it some way.

      I usually prefer to visit https://ptpb.pw/f and paste the public key, hit Paste, then copy the url with the text of "created". You can then curl https://ptpb.pw/PasteId on the server via another login setup (or ssh from a set-up machine) and add it to your ~/.ssh/authorized_keys. Make sure that file is mode 0600 (chmod 0600 ~/.ssh/authorized_keys). Also make sure the .ssh directory is 0700 (chmod 0700 ~/.ssh). OpenSSH refuses to use authorized_keys if these permissions are world readable. (A consolidated sketch of these server-side commands appears after this list.)

  3. Run ssh worf. Congratulations, you now have an SSH session on the best iOS client available at this time.
  4. To use mosh, ensure mosh is installed on the server.8 Then run mosh worf.
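The "install it some way" sub-step boils down to appending the public key to the server's authorized_keys with the right permissions. A consolidated sketch of the server-side commands, assuming your paste landed at https://ptpb.pw/PasteId:

# Append the pasted public key and lock down permissions so OpenSSH accepts it.
mkdir -p ~/.ssh && chmod 0700 ~/.ssh
curl https://ptpb.pw/PasteId >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys

For the mosh step, also note mosh-server listens on UDP ports 60000-61000 by default, so a server firewall that filters inbound UDP needs a port open in that range.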

Footnotes:

Tags: computing
13 Jan 2019

GNU C Style

No. Do not use it please! There are far easier-to-read and easier-to-use styles for C!1

gnu-comic.png

Footnotes:

Tags: computing rant
11 Jan 2019

Publishing with org-static-blog

Criteria

After reviewing a list of org-mode1 capable static website generators2, I decided to see if org-static-blog3 could satisfy my simple needs. My criteria for choosing an org-mode static site generator were:

  • it must be actively maintained,
  • it must be simple to set up with customizations,
  • and it must work with Emacs 26 and later.

This ruled out quite a few right away. I didn't attempt using org-publish, as it looked like a great deal of configuration to achieve a minimum viable web-page for this project.

Configuration of org-static-blog

Following the org-static-blog README documentation, it is very straightforward to get a minimal viable website generated. I added the following to my init.el:

(setq org-static-blog-publish-title "blog.winny.tech")
(setq org-static-blog-publish-url "https://blog.winny.tech/")
(setq org-static-blog-publish-directory "~/projects/blog/")
(setq org-static-blog-posts-directory "~/projects/blog/posts/")
(setq org-static-blog-drafts-directory "~/projects/blog/drafts/")
(setq org-static-blog-enable-tags t)

;; Register the mode after the setqs so it matches the configured posts
;; directory, not the package default.
(add-to-list 'auto-mode-alist (cons (concat org-static-blog-posts-directory ".*\\.org\\'") 'org-static-blog-mode))

(setq org-static-blog-page-header
      "
<link href=\"static/style.css\" rel=\"stylesheet\" type=\"text/css\" />
")

I opted to setq all the configuration variables; I will likely switch to using M-x customize-group RET org-static-blog RET in the future however. It's a better experience.

Then I simply create a new post with M-x org-static-blog-create-new-post RET <TITLE> RET, edit the buffer, save it, and run M-x org-static-blog-publish RET. I also added some styling to my style.css4 based on the Tachyons CSS framework5. I had previously used Bootstrap6 for styling, but was hoping to avoid frameworks and other extra tooling that shouldn't be necessary to generate a simple site like this one.
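For completeness, each post is an ordinary org file whose keywords drive the generator. A minimal skeleton; note the tag keyword shown is an assumption that depends on your org-static-blog version (newer releases use #+filetags, older ones used #+tags):

#+title: Publishing with org-static-blog
#+date: <2019-01-11 Fri>
#+filetags: computing

Body of the post goes here, in ordinary org-mode markup.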

Deploying with Caddy and GitHub

I am a fan of the Caddy7 web server, which offers automatic HTTPS via LetsEncrypt with only a few lines of configuration. In addition, Caddy has a plugin named git which can automatically deploy content from git repositories, with webhook support. To deploy, the following steps are taken:

  1. M-x org-static-blog-publish RET from Emacs to regenerate the static site,
  2. commit the changes in git,
  3. and finally push the git branch to GitHub.

After these steps, GitHub automatically sends an HTTP POST request to my Caddy server with information about the new git commits, and Caddy pulls the git repository. If everything went well and the webhook successfully fired, the website is now deployed.

Server Configuration

I switched most of my personal internet-related services to Docker8 in conjunction with docker-compose9 last year. The main rationale is that I can move my configuration between hosts without dealing with system package versions. I already had Caddy set up, so it was as simple as adding this to my Caddyfile:

blog.winny.tech {
        root /srv/www/blog.winny.tech
        gzip
        log /logs/blog.winny.tech.log
        git https://github.com/winny-/blog.winny.tech {
                hook /webhook top-secret-password-redacted
        }
}

The relevant lines of my docker-compose.yml looks like this:

version: "2.1"
services:
  web:
    image: abiosoft/caddy:no-stats
    ports:  # Expose the webserver ports to the internet
      - "80:80"
      - "443:443"
    environment:
      # This is where caddy places certs after ACME negotiation.
      CADDYPATH: "/etc/caddycerts"
      ACME_AGREE: "true"
    volumes:
      - /srv/caddy/certs:/etc/caddycerts
      - /srv/caddy/Caddyfile:/etc/Caddyfile  # Configuration
      - /srv/www:/srv/www  # the websites
      - /srv/caddy/logs:/logs

A keen docker-compose-savvy reader will notice I did not specify a restart: always entry. I had Caddy configured to always restart; however, when requesting new HTTPS certificates from LetsEncrypt, there is a tendency to misconfigure the domain configuration or Caddyfile, and if Caddy requests too many HTTPS certificates in a short amount of time, LetsEncrypt will rate-limit my future requests. Usually this only requires an hour or two of waiting, but it is frustrating to deal with when trying to fix my configuration. Instead, I'd rather Caddy exit after failing to activate all the domains, so I can fix my configuration first.
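If you want a middle ground, restarting on crashes while still giving up after repeated failures, Docker's on-failure restart policy with a retry cap approximates it. A sketch; whether the :N suffix is honored depends on your compose file version, so treat this as an assumption to verify:

services:
  web:
    # Retry a crashed Caddy a few times, then stop, so a bad Caddyfile
    # cannot burn through the LetsEncrypt rate limit overnight.
    restart: "on-failure:3"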

GitHub Configuration

Simply create the GitHub repository, then add a webhook. It is important to note the webhook must send a JSON payload. By default, a newly created webhook will send an application/x-www-form-urlencoded payload and will not work.

Conclusion

With this simple setup I can write posts. I can easily move the configuration to a new host at will. In addition, my setup does not depend on future use of GitHub, as GitLab, Gogs, and other git hosts offer webhook support in the same way. Most importantly, I can author org-mode files and get a better balance between features and ease of use than what markdown offers.

Web dev is one of my least favorite programming exercises. Between all the testing necessary to ensure a simple site works across many platforms, the trend to use a very complex system such as webpack and many other tools to produce simple websites, and the perpetual flux-and-flow between vendors only partially implementing good features, web dev just doesn't do it for me. Hence, I am very pleased how simple this project turned out to be.

Footnotes:

Tags: computing
09 Jan 2019

Toggle Redshift with Keyboard Shortcut

Redshift is a screen-tinting program that achieves similar goals to the popular f.lux1 program.

I perused the redshift man-pages and noticed there is no documented way to toggle redshift. Of course one can click the notification area icon when using redshift-gtk, or SIGTERM the redshift process, but neither is very user friendly. (The mouse is not user friendly.) After some awkward DuckDuckGo-ing and Googling, I found an obvious solution on the redshift homepage2: simply send SIGUSR1 to the redshift or redshift-gtk process. When using redshift-gtk, one can choose to send SIGUSR1 to either redshift or redshift-gtk.

This is the script I came up with:

#!/bin/sh
# Toggle redshift by sending SIGUSR1 to the process named exactly "redshift".
set -eu
if ! pkill -x -SIGUSR1 redshift; then
    echo 'Could not find redshift process to toggle.' >&2
    exit 1
fi

After installing the script into my system's PATH, all I have to do is add a line to my Xbindkeys3 configuration file (~/.xbindkeysrc.scm) such as:

(xbindkey '(Mod4 F2) "toggle-redshift")

Now I can type Mod4-F2 and toggle Redshift.

Footnotes:

Tags: productivity computing

© Winston Weinert (winny) — CC-BY-SA-4.0