Using an old Supermicro IPMI to configure broken networking

Published: Monday, Aug 9, 2021
Last modified: Monday, Aug 9, 2021 (34f8d54)

The goal of this post is to demonstrate the usefulness of IPMI even in hobbyist or personal use. Anything that means less touching physical machines to power cycle them, or fix network misconfigurations, can save a lot of time.

I had broken my NAS’s networking by adding a bridge and attaching the existing ethernet device to it. I forgot to configure the ethernet device to stop requesting an IP address via DHCP, so that only the bridge itself would request one. This had the surprising effect of both the bridge (br0) and the ethernet device (eth0) being given the same IP, as they both had the same MAC address (the bridge talks to my LAN over its eth0 slave device). Whoops, stuff happens.
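For the record, the fix is to mark the physical interface as manual (no DHCP) and let only the bridge request an address. Here’s a minimal sketch of what I was aiming for, assuming an ifupdown-style /etc/network/interfaces; the exact bridge directive names vary between ifupdown implementations:

    # /etc/network/interfaces (sketch; bridge directive names vary by ifupdown flavor)
    auto eth0
    iface eth0 inet manual        # slave device: no DHCP here

    auto br0
    iface br0 inet dhcp           # only the bridge requests an address
        bridge-ports eth0         # enslave eth0 to the bridge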

This NAS does not have HDMI, DVI, or DisplayPort. I do have a VGA monitor, but it was packed up in storage. My first thought was to use a serial console, but it appears I hadn’t configured it correctly, as I got a blank terminal. Then I remembered: this thing has IPMI with full VGA console access. There is a nasty catch, which is what this post is about.

For anyone curious, here are the specs for my NAS. It’s not very exciting and the software configuration barely works, but it is reliable. A TODO is to ansible-ize the configuration. The motherboard and CPU were purchased used off eBay, which was a good decision, as they were reasonably inexpensive.

Anyway, back to solving that pesky networking issue…

Using an Ubuntu VM

After installing Ubuntu 20.04 in VirtualBox, I set up the guest additions (run sudo apt install build-essential, then run VBoxLinuxAdditions.run off the Guest Additions ISO). Then I tried opening the console just for fun. I was greeted by a warning that my Java was out of date (a browser alert()), which I ignored.
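Concretely, the guest additions step was roughly the following; the mount point is arbitrary, and you may also need the kernel headers for your running kernel if they aren’t already installed:

    # inside the Ubuntu guest, with the Guest Additions ISO inserted
    sudo apt install build-essential
    sudo mount /dev/cdrom /mnt
    sudo /mnt/VBoxLinuxAdditions.run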

After clicking on the console, I got a prompt to do something with the .jnlp Java Web Start file. Ubuntu didn’t know what to do with it, so I did some looking around and found this helpful Stack Overflow post. It says I should just install icedtea-netx (e.g. sudo apt install icedtea-netx). After installing it and clicking on the console thumbnail again, I was again prompted to do something with the file, but this time I was given the option to “Open with IcedTea Web Start (default)”. After clicking OK, I was greeted with a Java dialog asking whether I trust the Web Start application, as its digital signature could not be verified. Here are the two warning dialogues. I’d recommend doing this in a throwaway VM; it’s old, crufty software.
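For reference, the whole dance boils down to the following; the .jnlp filename is whatever the IPMI hands your browser, so treat it as a placeholder:

    sudo apt install icedtea-netx
    # either let the browser open the file with IcedTea Web Start,
    # or launch the downloaded .jnlp by hand:
    javaws ~/Downloads/launch.jnlp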

After clicking “Run” and “Yes” at the security prompts, I was greeted with the VGA console.

WinXP for the fun of it anyways?

I had initially set up Windows XP assuming I wouldn’t be able to get this to work on a modern Linux distro. But as the above section demonstrates, it’s possible on Ubuntu 20.04. Still, I figured it’d be fun to leave this section in here for the reader’s enjoyment. I learned my lesson: try stuff before you assume anything.

WinXP is available on archive.org free of charge. I don’t know how the licensing works, though it has been on archive.org for many years with CD keys, so it must be fine, or Microsoft doesn’t care.

After installing Windows XP (and installing VirtualBox guest additions), you might need to install a browser that can access websites that use strict TLS/SSL ciphers for HTTPS. I found this very cool website that describes how to get started with Windows XP in 2021. Folks still run it unironically for fun and profit. Crazy! But I’m also that crazy, so let’s grab a good browser.

Enter Mypal. It has a cute raccoon mascot. It also seemed to work more reliably than the version of Arctic Fox I tried (it kept hanging ???). You can download it here, from its GitHub releases page. Enjoy the following screenshot!

Now that we have a modern-ish web browser, we can access the webpages that offer Java installers for Windows XP. Unfortunately, Oracle seems to have removed all references to legacy Java builds that work on Windows XP. I get an error message when trying the old Java releases on Windows XP:

An hour of Google head-bashing later, I found this link to a questionable website called Filepuma that offers a usable Java installer. For the esteemed reader, here is the output of sha256sum:

43edfa7ecde47309cb7af2ce81dd856f7299e47ae26aa387fc5db541a016bea4  Java_Runtime_Environment_(32bit)_v7.0.5.exe

Running it, one gets the usual security warning followed by a setup screen. Success!

Finally, accessing the console

After a reboot, I fired up Internet Exploder again and logged into the IPMI web page. After clicking on the console preview, the interactive VGA console automatically opened up in Java Web Start. Fantastic! I was able to log in to the console and fix the networking issue. All from my desk, without touching the affected computer.

No need to reinstall your OS

Published: Thursday, Jul 8, 2021
Last modified: Thursday, Jul 8, 2021 (a1b05c7)

A fantastic “feature” of Linux, BSD, and even Windows 10 is that you don’t really need to reinstall to migrate an installation to a new computer. A common misunderstanding is that if you get a new PC, you must use the OS install it came with, or install a fresh copy of your OS. If you’re intending to replace an existing PC (and dispose of or re-purpose the old one), there is probably no need to reinstall your OS and deal with user data migration.

On Linux, if you rip an SSD out of any x86 box, it will probably boot in any other machine, including virtualization software. One example of this is the laptop I am currently typing away on. It is the third laptop for the same Linux installation. Each time I simply either moved the SSD over or duplicated it onto another SSD. Sometimes I have to modify some configuration, as I use Gentoo on my workstation and laptop and it’s not going to do everything for me automagically (or otherwise surprise me with some unwelcome, opinionated defaults).

My NAS has been migrated between multiple rebuilds with no problem. Even switching between SAS passthrough cards is not a problem. It runs Alpine… but the distro hardly matters in this discussion; all distros will handle booting on random equipment reasonably well. My desktop has survived a complete rebuild (new everything) and many partial rebuilds. It’s the same install from years ago.

Sometimes on IRC I hear of people having the same install from years ago. I think it might be more common in the Gentoo community, but I don’t see this as a distro-specific thing, only that Gentoo is a bit less surprising when things break: it gives clear error messages and doesn’t hide breakage behind a bunch of tooling that takes a long time to learn.

Example migration

I recently swapped out my HP EliteBook 820 G4 for an XPS 13. Both devices have NVMe SSDs. I saw four options to perform the installation migration:

  1. Install the old SSD into the new laptop;
  2. Remove the old SSD, install it into a USB SSD enclosure, then copy its contents over (using ddrescue) to the new SSD from a LiveCD environment;
  3. Boot the old laptop with a LiveCD, make an image of the SSD onto an external drive, then boot the new laptop with a LiveCD, write the image onto the new SSD from the external drive;
  4. Or boot both laptops with a LiveCD, network them, then copy the old SSD’s data over the network to the new SSD in the other machine.

I opted for option 4.

The first step is to boot both laptops with live media. I prefer GRML because it’s pretty simple and no-frills, yet it ships with normal desktop stuff such as audio, if you really need it. It has a boot option to load the entire live medium into RAM, thereby allowing you to remove the bootable media. (One could also PXE boot to achieve a similar effect.) I simply booted each laptop from the same USB stick, using the option to load into memory. Next up was setting up a network. In this case I had 1Gb ethernet interfaces on both devices (the XPS 13 has ethernet via a USB-C dock). I simply ran grml-network on both, said no to DHCP, and set each to an RFC 1918 address in the same subnet. Ping one host from the other to verify it works.
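If you’d rather skip grml-network, the by-hand equivalent is just a couple of ip commands. Interface names are illustrative (yours may be enp0s25 or similar); the addresses match what I use in the next step:

    # on the new laptop (it will receive the image)
    ip link set eth0 up
    ip addr add 192.168.0.1/24 dev eth0

    # on the donor laptop
    ip link set eth0 up
    ip addr add 192.168.0.2/24 dev eth0

    # sanity check from the donor
    ping -c 3 192.168.0.1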

The next part is a bit muddy, because it did not work perfectly, but it did achieve the intended result. I ran nc -l 1234 | pv > /dev/nvme0n1 on the new machine. Then I ran pv /dev/nvme0n1 | nc 192.168.0.1 1234 on the donor machine. It took about 2.5 hours to copy the entire 1TB SSD. In retrospect I believe this could have been faster had I used 10Gb ethernet or a USB enclosure, but it got the job done.

One gotcha: when the imaging finished, nc/pv hung on both sides. I could tell they were done because no more progress was being made and the progress counters matched the size of the donor machine’s drive. This is not a good thing to experience, so any recommendations to get nc ... | pv ... to exit cleanly are welcome 😁.
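My best guess, which I haven’t gone back and tested, is that the sender never signals EOF over the socket. Depending on which netcat flavor the live medium ships, OpenBSD nc’s -N flag (or -q 0 on some builds) shuts the connection down once stdin hits EOF, which should let both ends exit on their own:

    # receiver (new machine)
    nc -l 1234 | pv > /dev/nvme0n1

    # sender (donor): -N (or -q 0) closes the connection at EOF so both sides exit
    pv /dev/nvme0n1 | nc -N 192.168.0.1 1234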

Next up was booting. I just rebooted and it worked. Note: I set up my GRUB installs to boot on both MBR BIOS systems and UEFI systems. So tell your friends: you don’t need to pick ’n choose how to boot your PC, just boot it either way if you set up GRUB to support both.
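For anyone wanting the same setup, it’s roughly two grub-install runs on a GPT disk that has both a small BIOS boot partition and an EFI system partition mounted at /boot/efi (device name is illustrative):

    grub-install --target=i386-pc /dev/nvme0n1                                # legacy BIOS path
    grub-install --target=x86_64-efi --efi-directory=/boot/efi --removable    # UEFI path
    grub-mkconfig -o /boot/grub/grub.cfg

The --removable flag installs GRUB to the fallback EFI/BOOT path, so the disk boots even on a machine whose firmware has never registered a boot entry for it.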

Migrating Windows Installs

I don’t know if it is realistically feasible to migrate a pre-Windows 10 install, but with Windows 10, one can simply yank the OS disk and install it in the new PC. If you need to copy the install onto a soldered SSD, or are upgrading the storage, you can just image the entire original storage device onto the destination storage device. One can use gparted to resize Windows. Surprisingly, Windows will handle this - previously Windows would get upset if the NTFS C: partition was resized without special treatment.

Alternatively, there are vendor tools that can streamline this process, such as the Samsung Data Migration tool.1

One gotcha - even if you successfully migrate your Windows 10 install to a new box, you may have to reactivate the license… you’ll find out.

Some other gotchas

If you need to resize an install, it might be feasible without much trouble. If you’re growing a Linux install, usually one can just resize the partitions/block devices in gparted/parted, then run resize2fs(8) on the filesystems. If you’re using LUKS encryption, the mapping appears to take its size from the block device it resides on. As for LVM2, see pvresize(8).
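Concretely, growing a partition/LUKS/LVM2/ext4 stack looks something like this, from the outermost layer inward (device and volume group names are made up):

    parted /dev/nvme0n1 resizepart 2 100%   # grow the partition to fill the disk
    cryptsetup resize cryptroot             # grow the open LUKS mapping to fill the partition
    pvresize /dev/mapper/cryptroot          # grow the LVM2 physical volume
    lvextend -l +100%FREE vg0/root          # grow the logical volume
    resize2fs /dev/vg0/root                 # grow the ext4 filesystem last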

Shrinking is trickier because you have to adjust the LVM2/LUKS/filesystem layers before shrinking the partition itself, and the partition table math is hard. You’ll have to carefully shrink the filesystem first, then the block devices, then the partition, in that order. I haven’t quite figured out how to get the sizes exactly right without guesswork, so it might be easiest to shrink the filesystem, LVM2 physical volume, LUKS, etc. to be extra small (still a bit bigger than whatever they contain), then “grow” each filesystem/block device back out after the partition is finally shrunk. This way you don’t need to do exact math. Recap: shrink everything to be a bit too small, starting with the filesystem and working your way out, then grow everything inside the partition, growing the filesystem last.
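A rough sketch of that “shrink extra small, then grow back” approach, reusing the made-up names from above and deliberately sloppy sizes:

    # phase 1: shrink from the inside out, each layer a bit smaller than the one around it
    resize2fs /dev/vg0/root 90G
    lvreduce -L 95G vg0/root
    pvresize --setphysicalvolumesize 100G /dev/mapper/cryptroot
    cryptsetup resize --size $((105 * 2097152)) cryptroot   # 105 GiB; --size is in 512-byte sectors
    parted /dev/nvme0n1 resizepart 2 110GiB                 # resizepart takes the new END position

    # phase 2: grow every layer back out to fill the now-smaller partition
    cryptsetup resize cryptroot
    pvresize /dev/mapper/cryptroot
    lvextend -l +100%FREE vg0/root
    resize2fs /dev/vg0/root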

Backups! Backups! Backups! If you are doing anything with data, it doesn’t exist if you don’t have backups that you can restore from. Ideally you want at least two copies. When imaging anything, it’s usually a good idea to keep a copy of the original image until you’re satisfied with your handiwork. Better safe than sorry. Data storage is cheap… or rather, if you can’t afford backups for your storage, you can’t afford your storage (as the data doesn’t exist without backups).

Conclusion

I know I streamlined the details a bit, but I wanted to keep this short. If the reader takes anything away from this post, it’s that most modern OS installs are portable between hardware and are easy to move onto new storage. Reinstalls shouldn’t be a necessity, but a matter of policy only - if you are re-purposing a machine, by all means reinstall, but there’s no need when migrating.

As always, I should shout out the live media used for this post: GRML Linux is great for this kind of work.


  1. Thanks smokey991 for suggesting this. ↩︎

Freenode is dead, long live Freenode!

Published: Wednesday, May 26, 2021
Last modified: Thursday, Jul 8, 2021 (869bcd9)

Note: this is my OWN opinion and not representative of any community entity. This is a summary of what I’ve experienced since the Freenode takeover.

(The new Freenode is Libera.chat.)

Drama?

I prefer to not take sides in online drama, but I feel like I have to err on the side of not-nuking-and-paving IRC channels that have existed for decades.

Here’s the summary of what’s happened, from my perspective: a guy with a bunch of money takes over a community-operated network. There was much controversy. I thought it wasn’t a big deal because these things happen, but now we are at an impasse. The Freenode staff is operating a bot that searches for channels whose topic mentions the Libera IRC network, then forcefully moves the offending top-level channels (that is, with a single #) to unofficial channels (that is, with a ## prefix). This has the unfortunate effect of erasing the community each of those channels has built up.

Most of the popular channels that still exist are increasingly inactive, with #bash, #git, #python all hardly active these days (a couple messages an hour).

I’ve heard of a serious moderation mess in the Freenode support channels, with moderation being temporarily suspended because nobody wanted to censor the negative feedback towards Andrew’s new direction, but I try to stay far away from that; it’s not my fight.

What channels have been erased?

Disclaimer: I am assuming these are the actions of the present Freenode staff, and not a “rogue faction” caused by Andrew’s reckless takeover and splintering of the community. I think this is a reasonable assumption, but it is a pretty strong assumption nonetheless.

Any channel that mentions Libera.chat in its topic (e.g. “Hey everyone, we are transitioning to libera.chat!”) is automatically taken over by the Freenode staff. The channel is then redirected to a newly created unofficial (## prefix) channel with the same name. This erases the good will Freenode has built up over the decades with open source communities and developers. It also disrupts the community in each of these channels, in that bots and users who do not autojoin have to be manually reconfigured or told to move over to the replacement channel. Being kicked and banned automatically for participating is kind of lame, to say the least. Almost all the channels taken over this way have virtually zero presence in their unofficial replacement channels.

Here is a short list of channels purged this way:

  • #tnnt
  • #NetHack
  • #weechat
  • #haskell
  • #vim
  • #emacs
  • #sway
  • #k-9
  • #curl
  • #zig
  • #musl
  • #archlinux
  • #scheme
  • #irssi
  • #znc
  • #qutebrowser
  • Wikipedia channels (e.g. #wikipedia)

Voluntary Purge

Some other channels have made the decision on behalf of their users, giving them an official message about where they have moved to. This has the advantage that users will know where to find the new community channels, unlike with the victims above:

  • #gentoo channels (moved to Libera.chat)
  • #perl (moved to Libera.chat)
  • #nixos (moved to Matrix)

Still exist, but are transitioning

  • #lisp (They’re moving to “l i b e r a” — this circumvented their purge-bot’s detection)
  • #alpine channels (moved to OFTC, hence they were not purged for mentioning libera.chat)

Some social media coverage

Closing remarks

I was okay with a little controversy; however, actively destroying channels that were mitigating the fear and uncertainty caused by the hostile takeover earlier this month does not send a good message. I’m sure everyone has a side in this discussion, but it’s hard to accept that Freenode in any capacity would dream of destroying channels like #curl, #haskell, #emacs, #weechat, #irssi, #archlinux. Andrew and friends are insane if they think anyone is going to put up with this misconduct.

Given that even the surviving channels are all pretty quiet, I think it’s safe to say Freenode is dead, except for niche communities that have managed to survive this unfortunate turn of events.