07 Jan 2020

Switching website to GitLab Pages

Previously I detailed how I set up blog.winny.tech using GitHub for source code hosting and Caddy’s git plugin for deployment. This worked well, and I used a similar setup with my homepage. The downside is that I host the static web content myself and am tied to using Caddy.1 I imagine simpler is better, so I opted to host my static sites — https://winny.tech/ & https://blog.winny.tech/ — with GitLab Pages.

What’s wrong with Caddy?

Caddy is very easy to get started with, but it has its own set of trade-offs. Over the last few years, I’ve noticed multiple hard-to-isolate performance quirks, some of which were likely related to the official Docker image. In particular, I had built a Docker image of Caddy with webdav support, and the overall performance tanked to seconds per request, even with webdav disabled. I still have no clue what happened there; instrumenting Caddy through Docker appeared nontrivial, so I gave up on webdav support, reverted to my old Docker based setup, and everything was fast, once again.

There is a good amount of inflexibility in Caddy, such as the git plugin only being able to deploy into a subfolder of the web root. Its rewrite logic usually does what you want, but it is not nearly as flexible as nginx’s.

Asking questions on their IRC is usually met with no response of any kind, which indicates to me that the project’s community isn’t very active.

The move to Caddy v2 is unwelcome; I don’t want to relearn yet another set of config files and quirks, especially weeding through the layer of configuration file format adapters and the abstracted-away configuration options, so I’d rather just use Certbot and some other HTTPD that won’t change everything for the fun of it.2

Until recently Caddy experimented with a pretty dubious monetization strategy. HackerNoon published an article detailing how it worked. In short: they plastered text all over their website claiming you “need to buy a license” to use Caddy commercially, though that claim was never true. Caddy was always covered by the Apache License 2.0. Instead, you needed a commercial license only in the narrow use-case that your organization wanted to use Caddy’s prebuilt release binaries as offered on their website. It is good they stopped this scheme, but it leaves a bad taste with the community, and with me, and discourages me from relying on the project moving forward.

Why GitLab Pages instead of GitHub Pages?

I have used both GitHub Pages and GitLab Pages in the past. My experience with GitHub Pages is that it’s relatively inflexible, it’s difficult to see what is going to be published, and its CI/CD setup is only useful for certain Jekyll-based sites. GitLab Pages, on the other hand, lets you set up any old Docker-based CI/CD workflow, so it is possible to render a blog with GitLab CI using any static site generator. The IEEE-CS student chapter I am a part of does just this: we use a combination of static redirect sites and a Pelican-powered static website. There are a large number of example repositories for most of the popular ways to publish a static website, including Gatsby, Hugo, and Sphinx. Needless to say, GitLab Pages puts GitHub Pages to shame in terms of flexibility.

Setting up GitLab Pages

There are two steps to setting up GitLab Pages. These are the most important ideas related to GitLab Pages; how to navigate the site is something the reader must experience for themselves. Nothing beats experimentation and reading the docs. Make sure to refer to the official GitLab Pages documentation for further details.

1) Getting GitLab Pages to deploy your git repository

Before getting started, make sure GitLab Pages is activated for your project. You can find it under Settings → Pages in your project. Most of the Pages settings are rooted in that page.

How GitLab Pages CI/CD deploys your site is specific to your software or lack of software. If you are simply setting up a static website on GitLab Pages, a simple .gitlab-ci.yml will work for you:

pages:
  stage: deploy
  script:
  - mkdir .public
  - cp -rv -- * .public/  # Note the `--'
  - mv .public public
  artifacts:
    paths:
    - public
  only:
  - master

This simply tells GitLab CI/CD to copy everything not starting with a . into the public folder. By the way, one cannot change the public folder path. It does not appear possible to use something like artifacts: paths: ["."] to deploy the entire git repository.

There is a GitLab CI/CD YAML lint website3 (and web API). Additionally, there is reference documentation for the .gitlab-ci.yml schema. Please note the linter will often yield confusing error messages. For example, it is invalid to omit a script key, but the error message is Error: root config contains unknown keys: pages. Take the error messages with a grain of salt.
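For example, here is a quick way to exercise the lint web API from a shell. This is only a sketch; it assumes the instance exposes the documented /api/v4/ci/lint endpoint and that jq is installed:

# Wrap the YAML file in a JSON object and POST it to the lint endpoint.
jq -Rs '{content: .}' < .gitlab-ci.yml |
  curl --silent --header "Content-Type: application/json" \
       --data @- "https://gitlab.com/api/v4/ci/lint"
# A response like {"status":"valid","errors":[]} means the file parses.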

Once you have what seems like the .gitlab-ci.yml that you want, commit it to your git repository, and push to GitLab. Check progress under CI/CD → Pipelines. If everything works out, you should be able to view the website on GitLab Pages’ own domain — e.g. https://winny.tech.gitlab.io/blog.winny.tech. The format of the above URL (visible in Settings → Pages) is https://<namespace>.gitlab.io/<project>. If you can’t view your website, check the CI/CD pipeline’s logs, and inspect the artifacts ZIP — which is also available from the CI/CD pipelines page. Chances are you need to edit the .gitlab-ci.yml or tweak the scripts used in the YAML file.

2) Hosting the GitLab Pages site on your (sub-)domain

All the tasks in this section are done from Settings → Pages, using the “New Domain” or “Edit” pages.

To set up GitLab pages on your domain, you need to first prove ownership of that specific domain via a specially constructed TXT record, then configure that specific domain to point to GitLab Pages via a CNAME or A record. In general I recommend using an A record because you can stuff any other records you please on the same domain.

Simply add an A record to your DNS setup like so: yourdomain.com. A 35.185.44.232.4 After the DNS updates, which can take anywhere from seconds to the remainder of the record’s TTL (Time-To-Live), visiting your domain should provide a GitLab Pages placeholder page with a 4xx error code.
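You can confirm the record from a shell rather than waiting on a browser; here is a quick sanity check with dig, using a placeholder domain:

dig +short A yourdomain.com
# Should print 35.185.44.232 once the record has propagated.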

Next, prove to GitLab you own the domain. Create the TXT record as indicated in the GitLab Pages management website. The string to the left of TXT should be the name/subdomain, and the string to the right of TXT is the value. Alternatively, you can put the entire string into the value field of a TXT record (?!).

Note, the above two sub-steps are independent; one can validate the domain before adding the record to point it to GitLab, and vice versa.

GitLab Pages Gotchas

There are a few gotchas about GitLab Pages. Some of them are related to GitLab Pages users not being familiar with all of the DNS RFCs. Others are simply because GitLab Pages has quirks too.

CNAME on apex domain is a no-no

Make sure you do not use a CNAME record on the apex domain. Use an A record instead. Paraphrasing from the ServerFault answer: RFC 2181 clarifies that a CNAME record can only coexist with records of types SIG, NXT, and KEY. Every apex domain must contain NS and SOA records, hence a CNAME on the apex domain will break things.
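A quick way to spot an accidental CNAME at the apex, again with dig (substitute your own domain):

dig +short CNAME yourdomain.com   # should print nothing for an apex domain
dig +short SOA yourdomain.com     # the apex should still answer with its SOA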

CNAME and TXT cannot co-exist

The above is also true for TXT and CNAME records on the same subdomain. For example, if one adds both TXT somevaluehere and CNAME example.com to the same name, say hello.example.com, things will not behave correctly.

If we have a look at the GitLab Pages admin page, the language is mildly confusing, stating “To verify ownership of your domain, add the above key to a TXT record within to your DNS configuration.” At first, I took this to mean “place this entire string as the right hand side of a TXT record on any subdomain in your configuration”. This does work; as such I have:

blog.winny.tech. IN A   35.185.44.232
blog.winny.tech. IN TXT "_gitlab-pages-verification-code.blog.winny.tech TXT gitlab-pages-verification-code=99da5843ab3eabe1288b3f8b3c3d8872"

But they probably didn’t mean that. Surely I should have this instead:

blog.winny.tech. IN A   35.185.44.232
_gitlab-pages-verification-code.blog.winny.tech. IN TXT gitlab-pages-verification-code=99da5843ab3eabe1288b3f8b3c3d8872

I feel a bit silly after realizing this is what the GitLab Pages folks intended for me to do, but it really was not clear, especially given that clicking in the TXT record’s text box highlights the entire string, instead of letting the user copy the important bits (such as the TXT record’s name) into whatever web management UI they might be using for DNS.
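To double-check which interpretation your zone actually serves, query the verification name directly; the values here mirror the example records above:

dig +short TXT _gitlab-pages-verification-code.blog.winny.tech
# Expect: "gitlab-pages-verification-code=99da5843ab3eabe1288b3f8b3c3d8872"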

The feedback loop for activation of the domain is slow

It can take a while for a domain to be activated by GitLab Pages after the initial deploy. Things to look for: if you set up the CNAME or A record correctly, you should get a GitLab Pages error page on your domain. The error is usually “Unauthorized (401)”, but it can be other errors.

The other thing to check is that your domain is in the “Verified” state on the GitLab Pages admin website.

The feedback loop for activation of LetsEncrypt HTTPS is huge

Sometimes GitLab Pages will seemingly never activate LetsEncrypt support for HTTPS access. If this happens, a discussion suggests the best solution is to remove that domain from your GitLab Pages setup and add it again. You will likely have to edit the TXT record used to claim domain ownership. This worked for me when I experienced the same issue.

Make sure to enable GitLab Pages for all users

Conclusion

GitLab Pages isn’t perfect, but this should streamline what services my VPS hosts, and give me more freedom to fiddle with my VPS configuration and deployment. I look forward to rebuilding my VPS with cdist, ansible, or saltstack. While that happens, my website will stay up thanks to GitLab Pages. Also, I imagine GitLab Pages is a bit more resilient to downtime than a budget VPS provider.

The repositories with .gitlab-ci.yml files for both this site and winny.tech are public on GitLab’s official hosting. Presently it is the simplest setup possible, simply deploying pre-generated content already checked into git, but the possibilities are endless.

Footnotes:

1

I could deploy my own webhook application server that GitHub/GitLab connects to, and have done so in the past, but every application I manage is another thing I have to, well, ahem, manage (and fix bugs for).

2

There are some cool new features in Caddy 2, such as the ability to configure Caddy via a RESTful API and a sub-command driven CLI, but I don’t need additional features.

3

From the GitLab CI Linter’s old page “go to ‘CI/CD → Pipelines’ inside your project, and click on the ‘CI Lint’ button”. Or simply visit https://gitlab.com/username/project/-/ci/lint.

4

It’s a good idea to compare the mentioned IP address against what appears in the GitLab Pages Custom Domain management interface.

Tags: computing operations
25 Dec 2019

How to fix early framebuffer problems, or "Can I type my disk password yet??"

Most of my workstations & laptops require a passphrase typed in to open the encrypted root filesystem. So my steps to booting are as follows:

  1. Power on machine
  2. Wait for FDE passphrase prompt
  3. Type in FDE passphrase
  4. Wait for boot to complete and automatic XFCE session to start

Since I need to know when the computer is ready to accept the passphrase, it is important the framebuffer is usable during the early part of the boot. In the case of the HP Elitebook 820 G4, the EFI framebuffer does not appear to work, and I’d rather not boot in BIOS mode just to get a functional VESA framebuffer. Making things more awkward, firmware is needed when the i915 driver is loaded, or the framebuffer will not work either. (It’s not always clear if firmware is needed, so one should run dmesg | grep -F firmware and check whether firmware is being loaded.)

With this information, the problem is summarized to: “How do I ensure i915 is available at boot with the appropriate firmware?”. This question can be easily generalized to any framebuffer driver, as the steps are more-or-less the same.

Zeroth step: Do you need only a driver, or a driver with firmware?

It is a good idea to verify whether your kernel is missing a driver at boot, missing firmware, or both. Boot up a Live USB with good hardware compatibility, such as GRML1 or Ubuntu’s, and let’s see what framebuffer driver our host is trying to use2:

$ dmesg | grep -i 'frame.*buffer'
[    4.790570] efifb: framebuffer at 0xe0000000, using 8128k, total 8128k
[    4.790611] fb0: EFI VGA frame buffer device
[    4.820637] Console: switching to colour frame buffer device 240x67
[    6.643895] i915 0000:00:02.0: fb1: i915drmfb frame buffer device

So we can see efifb is initially used for a couple of seconds, then i915 is used for the rest of the computer’s uptime. Now let’s check whether firmware is necessary, first checking if modinfo(8) knows of any firmware:

$ modinfo i915 -F firmware
i915/bxt_dmc_ver1_07.bin
i915/skl_dmc_ver1_27.bin
i915/kbl_dmc_ver1_04.bin
... SNIP ...
i915/kbl_guc_33.0.0.bin
i915/icl_huc_ver8_4_3238.bin
i915/icl_guc_33.0.0.bin

This indicates the driver will load firmware when available, if it is needed for the particular mode of operation or hardware.

Now let’s look at dmesg to see if any firmware is loaded:

[    0.222906] Spectre V2 : Enabling Restricted Speculation for firmware calls
[    5.511731] [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
[   25.579703] iwlwifi 0000:02:00.0: loaded firmware version 36.77d01142.0 op_mode iwlmvm
[   25.612759] Bluetooth: hci0: Minimum firmware build 1 week 10 2014
[   25.620251] Bluetooth: hci0: Found device firmware: intel/ibt-12-16.sfi
[   25.712793] iwlwifi 0000:02:00.0: Allocated 0x00400000 bytes for firmware monitor.
[   27.042080] Bluetooth: hci0: Waiting for firmware download to complete

Aha! So it appears we need i915/kbl_dmc_ver1_04.bin for i915. In the case one doesn’t need firmware, dmesg won’t show anything related to drm or a line with your driver name in it.

By the way, it is a good idea to check dmesg for hints about missing firmware or alternative drivers. For example, my trackpad is supported by both i2c and synaptics based trackpad drivers, and the kernel was kind enough to tell me.

First step: Obtain the firmware

On Gentoo, install sys-kernel/linux-firmware. You will have to agree to some non-free licenses; nothing too inane, but worth mentioning. Now just run emerge -av sys-kernel/linux-firmware. (On other distros it might be this easy, or more difficult; for example, in my experience Debian does not ship every single firmware file like Gentoo does, so YMMV.)
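If portage complains about masked licenses, one way to accept them is through ACCEPT_LICENSE. This is only a sketch using Gentoo’s predefined license groups, so adjust it to your own policy:

# Allow free licenses plus redistributable binary firmware, system-wide.
echo 'ACCEPT_LICENSE="-* @FREE @BINARY-REDISTRIBUTABLE"' >> /etc/portage/make.conf
emerge -av sys-kernel/linux-firmware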

Second step, Option A: Compile firmware into your kernel

Since most of my systems run Gentoo, it is business as usual to deploy a kernel with most excess drivers disabled except for common hot-swappable components such as USB network interfaces, audio devices, and so on. For example, this laptop’s config was originally derived from genkernel’s stock amd64 config with most extra drivers disabled, then augmented with support for an Acer ES1-111M-C7DE, and finally with support for this Elitebook.

I had compiled the kernel with i915 support built into the image, as opposed to as an additional kernel module. Unfortunately this meant the kernel was unable to load the firmware from the filesystem, because it appears only kernel modules can load firmware from the filesystem. To work around this without resorting to making i915 a kernel module, we can include the firmware within the kernel image (vmlinuz). Including both firmware and drivers in the vmlinuz has a couple benefits. First, they will always be available; there is no need to figure out how to load the driver and firmware from the initrd, let alone get the initrd generator one is using to cooperate. A downside is it makes the kernel very specific to the machine, because a different Intel machine may need a different firmware file compiled in.

To include the firmware in the kernel, I set the following values in my kernel config (.config in your kernel source tree):

CONFIG_EXTRA_FIRMWARE="i915/kbl_dmc_ver1_04.bin"
CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"

Note, if you’re using menuconfig, you can type /EXTRA_FIRMWARE (slash for search, then the text) followed by Return to find where these settings exist in the menu system.

Then I verified i915 is indeed not a kernel module, but built into the kernel image (it would be m if it’s a module):

CONFIG_DRM_I915=y

After compiling & installing the kernel (and generating a dracut initrd for cryptsetup/lvm), I was able to reboot and get an early pre-mounted-root framebuffer on this device.

Second step, Option B: A portable kernel approach (using sys-kernel/vanilla-kernel)

I discovered the Gentoo devs have begun shipping an ebuild that builds and installs a kernel with a portable, livecd-friendly config. In addition, this package will optionally generate an initrd with dracut as a pkg_postinst step, making it a very suitable replacement for users who just want a working kernel and don’t mind excessive compatibility (at a cost in size and build time).

This presents a different challenge, because while this package does allow the user to drop in their own .config, it is not very multiple-machine-deployment friendly to hard-code each individual firmware into the kernel. Instead we tell dracut to include our framebuffer driver. As mentioned above I found this computer uses the i915 kernel driver for framebuffer. Let’s tell dracut to include the driver:

cat > /etc/dracut.conf.d/i915.conf <<EOF
add_drivers+=" i915 "
EOF

Dracut is smart enough to pick up the firmware the kernel module needs, provided they are installed. To get an idea what firmware dracut will include, run modinfo i915 -F firmware which will print out a bunch of firmware relative paths.

After applying this fix, just regenerate your initrd using dracut; in my case I let portage do the work: emerge -1av sys-kernel/vanilla-kernel. Finally reboot.
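To confirm the firmware actually landed in the generated initrd, dracut’s lsinitrd tool can list its contents (the initrd path below is a guess; yours may differ):

lsinitrd /boot/initramfs-$(uname -r).img | grep -F i915/kbl_dmc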

Conclusion

Check dmesg. Always check dmesg. We found two ways to deploy firmware: in-kernel and in-initrd. The in-kernel technique is best for a device-specific kernel; the in-initrd technique is best for a portable kernel. I am a big fan of the second technique because it scales well to many machines.

I did not touch on the political side of using binary blobs. It would be nice not to use any non-free software, but I’d rather have a working system with a couple of small non-free components than a non-working system. Which is more valuable, your freedom, or the reduced capacity of your tools?

Footnotes:

1

GRML is my favorite live media. It is simple, to the point, and has lots of little scripts to streamline tasks such as setting up a wireless AP, an iPXE netboot environment, a router, installing Debian, and so on. Remastering is also relatively straightforward. It also has a sane GUI suitable for any machine (Fluxbox).

2

Thanks to this post on Ask Ubuntu

Tags: gentoo linux computing
23 Nov 2019

Milwaukee Code Camp

This past weekend (November 16th) I attended the Milwaukee Code Camp and was pleased with the content. There was plenty of food, coffee, and giveaways.

The Talks

I attended five talks:

  1. starting an open source project (link) 1,
  2. how to manage work life balance as a software developer (link),
  3. getting started with Docker and Kubernetes2 (link),
  4. introduction to Terraform for cloud infrastructure management (link),
  5. and accessibility (a11y) on the modern web (link).

I am pleased to say the open source fellow recommended GPLv3, MIT/X, and Apache 2.0 (and choosealicense.com), so I have a lot of respect for him. I think a lesser open source evangelist would recommend one license, or strongly recommend one license. It really does depend on your project.

I was able to get Minikube set up during the Docker/k8s talk in five minutes. There were no surprises when installing it from the official Gentoo repository. Just follow the installed readme, run the commands… it’s really quite easy to do. A friend commented that it wasn’t so simple to install Minikube and get it working on their system.
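For context, getting a local cluster going is roughly the following (a sketch, not the exact contents of the installed readme):

minikube start          # provisions a local single-node Kubernetes cluster
kubectl get nodes       # confirm the node reports Ready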

While I don't think I would use Terraform at this point, I have a good appreciation for when I might use it in the future. In addition, I found a Terraform provider for libvirt, so one could in theory provision their own cloud infrastructure on a simple libvirt cluster with Terraform. I believe this might be my first use-case for Terraform.

It was refreshing to see people talking about accessibility and the web. I am personally not a fan of the web stack for various reasons.3 But the reality is the technology won’t go away, and the browser is the greatest common denominator for both reaching users and simplifying a project’s platform support matrix. It was great to hear somebody in industry say “your website should be somewhat usable without javascript”. I am really pleased to hear this; perhaps there is hope for the web ecosystem yet.

The Food

Sweet pastries, water, soda, and coffee were available in the morning and throughout the day. There may have been more comprehensive breakfast items (cereal?), though I was late for the first session. Lunch was Dominos pizza and brownies. There was no shortage of pizza. The coffee was catered from Panera Bread.

Conclusion

In addition to learning about Terraform, Kubernetes, and accessibility, I met a lot of cool people. I think this event was an overwhelming success. Thanks to everybody who organized this event.

Footnotes:

1

Slides here.

2

See the example Docker-ized and Kubernetes-ized application on BitBucket here.

3

This is a discussion in itself. For example: I think a large majority of the user and developer experience with web technologies does not adhere to the principle of least surprise, making it very frustrating for everyone involved.

Tags: community
18 Nov 2019

Won MSOE x Google Cloud Hackathon

About a fortnight ago (Nov 9th) I went to the MSOE x Google Cloud hackathon.1 There was pizza, soda, and Google Cloud gear. Each group was given a Google AIY Computer Vision kit to assemble, and build a proof of concept around.

The kit contained a Raspberry Pi Zero W, the Raspberry Pi Camera Add-on, a breakout board to provide simplified pin-outs for a button with an integrated light, an additional LED that mounted next to the camera to indicate if the camera was active, and a piezo buzzer.2 All these components fit into a carefully engineered cardboard box that folded onto itself, held together with adhesive tape. The assembled device was remarkably robust and easy to operate.

Challenges

We were among the first groups to finish building the kit. It turned out the software on the included SD card was not exactly what we needed, and the SD card writing software for Windows (Etcher) was a bit unreliable and did not clearly indicate a successful write to the user. After a second attempt we had a bootable SD card.

The system took a couple minutes to boot and resize itself. Mind you, we did not have a Mini-HDMI to HDMI cable, nor a monitor to output the Raspberry Pi display. Thus we had to wait, chat, and eat pizza.

The next challenge was to “pair” the device with a wifi network so one could SSH into it. There is an Android app for this, and at first we paired it with a spare Android device acting as a hotspot. Unfortunately this configuration did not give us internet access when connected to that wireless access point. We were able to verify the device was working, SSH in, and inspect the images we took via the kit’s included camera.

We moved to a more central location, as the most cognitively demanding part was complete — construction of the device, and ensuring it works. This led to more networking challenges. We wanted a way to network this device to the internet so we could not only log into it via SSH, but also access PyPI from the device, and access StackOverflow from our laptops on the same network. With a little brainstorming we came up with this network topology:

[Image: msoe-network-topology.png (our improvised network topology)]

Yes, as you can see the path from the internet is (A) public wifi, (B) my friend Karl’s Android phone, (C) his laptop via Bluetooth tethering, and (D) finally a wifi network hosted by his laptop’s built-in wifi. We had an intermittent hiccup with the nameserver configuration not being set up correctly on the hosted wifi network — as such, DNS would not resolve. A quick tweak to Karl’s network manager settings mitigated this. And like that… we were networked together on a private wifi network complete with internet access.

There were some other technical issues. Because of our network topology, the round trip time to the internet was very high, occasionally over a second from the Pi, and pip has a relatively low timeout when installing packages. The workaround was to tell pip to calm down and be patient. I had installed tmux so we could share a session across the table (for pair programming), and the apt man-db triggers took around 5-15 minutes; with the crunch time we had, this felt unacceptable. The other technical issue was that the Raspbian image starts a lot of unnecessary services by default, which eat into the Raspberry Pi Zero W’s very limited memory. This caused pip to crash due to a failure to allocate memory. We had to disable lightdm (the GUI) and the default vision kit demo. Had this been a device I’d use for more than a couple hours, I’d go through and disable things like GIO services and other bloat that we never would use.3
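A hedged sketch of the sort of workarounds described above; the service names are assumptions (lightdm is the Raspbian default display manager, and the Vision Kit demo unit name may differ on your image), and the pip flags simply raise its patience:

# Free memory on the Pi Zero W by stopping the GUI and the stock demo.
sudo systemctl disable --now lightdm
sudo systemctl disable --now joy_detection_demo   # assumption: the AIY demo's unit name
# Tell pip to calm down: longer timeout and more retries over a slow uplink.
pip3 install --timeout 120 --retries 10 requests  # requests is only an example package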

Meanwhile our other two group members worked on a proof of concept game. The idea: the device comes up with a common word, and the user carries the device around, showing it various text on the wall or on paper. Using a cloud OCR service, it can recognize the words seen by the camera. It will then buzz happily when it is shown the correct word, or buzz sadly if shown the wrong word. Then the process repeats. It’s an “iSpy” with computer vision and words — a word hunt!

Completion

While our PoC was not deployed in time for presentations, we were about five to ten minutes away from setting it up and demoing it. During the presentations, we found that many of the other groups experienced the same issues with the Google AIY Vision Kit — due to the Raspberry Pi platform, Raspbian, and the way one pairs it to wifi. At least one other group managed to get some non-default code running on their Pi. We were only given around 3-4 hours from start to finish, with time lost to slow internet speeds while downloading the initial SD card image, man-db triggers rendering the device unusable for a while, and dealing with the lack of a suitable networking configuration.

Given all these challenges, I think we did very well, as did every other group that participated. We did have competition for 1st place, because some of the other groups had PoCs (though some did not get the Pi completely working) and others did get the Pi working but did not have a PoC. We were selected as 1st place, and each of us was given either a Google Home Mini or a Google Cloud hoodie. I went with the hoodie because I don’t want to use Google’s creepy spyware voice assistant. And the modding/reverse engineering community has done very little with this product; nobody has loaded custom firmware on it, for example.

Conclusion

Our issue in seeking out a usable networking topology had me thinking: if I simply had a device with two wifi radios, I could run one as an access point and the other connected to public wifi. This device would then yield the topology below, which would be ideal for these sorts of impromptu projects and activities:

[Image: dual-wifi.png (proposed dual-radio wifi setup)]

In addition, it would be perfect for demonstrations of man in the middle attacks on public wifi, or experimenting with multipath tcp and wifi.

With these sorts of events that start early on a Saturday morning, it’s been useful to agree with a friend to attend the same event. That way both parties are more likely to show up, because it wouldn’t be very personable to cancel last minute. We also had a wonderful team. I later met up with most of the same team for another event (post incoming). Hackathons are a great way to meet new people. I enjoyed this event thoroughly.

See you at the next hackathon :)

Footnotes:

1

There isn’t a link online yet :(. I will update this note when I find a link to the event.

2

Full list of materials here.

3

Stay tuned as I explore why I find distros like Debian not ideal in practice in a future post ☺

Tags: community
06 Nov 2019

GDG Milwaukee 2019 DevFest - We participated!

I attended the GDG Milwaukee 2019 DevFest last Saturday. This was my second hackathon. Around 6-9 teams participated. We coded for six hours, and I learned a lot about team dynamics. We formed a team of eight participants. We encountered a couple significant challenges.

The stack matters

Initially we decided to use Python and the Django framework. This turned out to be a grave error, because picking up Django quickly while staying productive is challenging. This challenge is multiplied by unfamiliarity with MVC/MVP web frameworks.

A couple hours in we sat down and decided Django had to go. We realized two of us had prior experience with Flask. Combined with flaskapi this could be a wonderfully simple way to build a RESTful API backend.

For the frontend we used Create React App. I did not work with it much, but I found it easy to run, deploy, and tweak. I think it was a solid choice.

The hardest part of our stack was integrating our hand-rolled RESTful API into the React.js based frontend. In fact we weren't able to complete this, but we got really close. It was a lot of work to get as far as we did as a team.

Most of the other groups also used Python. Multiple groups used Django, and one group even used Django Rest Framework. They appeared to be facing the same challenge we were having with getting Django to do anything productive in the allotted time. I know at my next hackathon I won't be recommending Django to the uninitiated.

The winning team used Firebase. Every project I've seen done in Firebase was rapidly prototyped, indicating it is extremely suitable for hackathons. I have deep reservations about using a proprietary PaaS, but maybe I can put this concern aside for my next hackathon. :)

Java. At my university and most others in my area, Java is the first language we learn. In some cases it may be the only language one really learns well. In this light, a team member mentioned perhaps we could use Tomcat or some other Java web framework in the future. This seems like a superb idea; I hope to explore it in an upcoming project with classmates.

Debian surprised me (again)

A funny experience during the hackathon was discovering a rather surprising patch Debian's virtualenv package ships. On CentOS 7, Gentoo, and possibly anything not Debian derived, running python3 -m virtualenv ./venv will create a Python3 virtual environment. This is not the case on Debian. Instead Debian will always default to installing python2 in the virtual environment. One must pass -p python3 to install python3. Sure seems wonky to me!
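A minimal sketch of the surprise, as run on a Debian box of that era (package state and versions will vary):

python3 -m virtualenv venv              # on Debian this may still give you Python 2
venv/bin/python --version               # check which interpreter you actually got
python3 -m virtualenv -p python3 venv3  # explicitly request Python 3
venv3/bin/python --version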

Demos never work

We almost had a working demo, but the part that got us was deployment. I spun up a Vultr VPS and installed npm, node, caddy, and virtualenv. I got the API backend running, built the Create React App pages, and tied it all together with a Caddyfile, but it simply wouldn’t work. There were too many moving parts, and manual deployment was too tedious to get right within the time frame.

There is something to be said for working in containerized workflows; this would have been a non-issue. Drop in a docker-compose.yml into the project and just run docker-compose up. Next time :)
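Something like this minimal sketch would have done it; the service layout and ports are assumptions about our project, not its actual structure:

cat > docker-compose.yml <<EOF
version: "3"
services:
  api:                 # the Flask backend, assuming it has a Dockerfile
    build: ./backend
    ports:
      - "5000:5000"
  web:                 # the Create React App frontend, likewise
    build: ./frontend
    ports:
      - "3000:3000"
EOF
docker-compose up --build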

Teamwork is essential

We had a team of eight members, and it was challenging to find tasks for everybody. Given we had two major components — frontend and backend — and quite a few members who needed instruction to get started, it was challenging to give both the coding and the instructing enough attention. In the future I will strive to have more balanced teams so everybody can feel more involved. Perhaps a good rule of thumb is to pair at most one beginner with each intermediate, never more than one.

Something else I think will help is ensuring nobody gets pigeonholed into managing the project; rather, share the responsibility. Project managers are likely not effective in a half-day hackathon.

Keep morale up. Don't let negativity distract from the team tasks. Redirect negativity into going for a walk, playing video games, or simply taking a break. Make sure to smile.

Conclusion

I had fun at GDG Milwaukee DevFest. Good food, good company. We found our initial choice of Django was not productive in true hackathon spirit; Flask was better for this. Maybe next time we’ll consider Firebase. If I had a nickel for every Debian patch that violated my own idea of least surprise, I’d have laundry money. Demos are hard; deployments should be automated or otherwise streamlined. Finally, teamwork is vital. Keep the team small, and make sure everybody has things to do.

See you next year GDG.

Tags: community

© Winston Weinert (winny) — CC-BY-SA-4.0