30 Apr 2020

Linux dmesg --follow (-w) not working?

For a couple months now, I have noticed that running dmesg -w on my workstation does not appear to print new kernel messages. In other words, dmesg --follow "hangs". Additionally, when running tail -f /var/log/kern.log to monitor new kernel messages picked up by syslog-ng, the latest messages do not come through until syslog-ng periodically "reopens" the /dev/kmsg kernel message buffer.

Why is this a problem?

This is a problem because I use the dmesg log to monitor important hardware-related messages, such as the kernel recognizing a USB device, and to diagnose Bluetooth/WiFi issues. When I plug in a USB drive, the first thing I do is check dmesg for the following messages:

[10701.359834] usb 2-4.4: new high-speed USB device number 8 using ehci-pci
[10701.394801] usb 2-4.4: New USB device found, idVendor=12f7, idProduct=0313, bcdDevice= 1.10
[10701.394807] usb 2-4.4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[10701.394810] usb 2-4.4: Product: MerryGoRound
[10701.394813] usb 2-4.4: Manufacturer: Memorex
[10701.394816] usb 2-4.4: SerialNumber: AAAAAAAAAAAA
[10701.395182] usb-storage 2-4.4:1.0: USB Mass Storage device detected
[10701.398885] scsi host7: usb-storage 2-4.4:1.0
[10702.401161] scsi 7:0:0:0: Direct-Access     Memorex  MerryGoRound     PMAP PQ: 0 ANSI: 0 CCS
[10702.401710] sd 7:0:0:0: Attached scsi generic sg6 type 0
[10702.651720] sd 7:0:0:0: [sde] 15654912 512-byte logical blocks: (8.02 GB/7.46 GiB)
[10702.652341] sd 7:0:0:0: [sde] Write Protect is off
[10702.652346] sd 7:0:0:0: [sde] Mode Sense: 23 00 00 00
[10702.652961] sd 7:0:0:0: [sde] No Caching mode page found
[10702.652965] sd 7:0:0:0: [sde] Assuming drive cache: write through
[10702.681473]  sde: sde1 sde2
[10702.684869] sd 7:0:0:0: [sde] Attached SCSI removable disk

This output reports that a USB device was detected, where it is plugged in, the vendor/product information, the USB speed it negotiated, the size of the storage, the device name (/dev/sde), and its partitions (/dev/sde1, /dev/sde2). There are a lot of other messages written to dmesg as well, such as the kernel detecting a bad USB cable, segmentation faults, and so on.

Given the importance of the above log output, I have developed a habit of running dmesg -w to monitor such kernel events. The -w flag tells dmesg to wait for new messages; the long option is --follow.

In addition to dmesg -w not working as intended, log entries are not written to /var/log/kern.log as they occur; instead the log is written in "bursts", which suggests syslog-ng occasionally reopens /dev/kmsg, thereby reading in new log messages, but the timestamps are identical for every entry in a given "burst".

Which systems were affected?

I have two systems with virtually identical OS installations: a workstation named snowcrash with an AMD FX-8350 on an ASRock M5A97 R2.0 motherboard, and an HP EliteBook 820 G4 named cyberdemon with an Intel Core i5-7300U. Curiously enough, the strange dmesg -w hang does not occur on cyberdemon, but does occur on snowcrash. Both hosts run mainline Linux, currently 5.6.4. Looking through my /var/log/kern.log files, this behavior was already apparent on a 5.4.25 kernel. As we will see later, this coincides with the affected versions that others have reported.

Additionally, I asked my friend tyil who happens to also use an AMD FX-8350 with Gentoo to check for the bug; he also had the problem on 5.6.0.

Pinpointing the bug

The first thing I did was find a way to reproduce the issue, which I captured in an asciinema recording. I then shared the recording on IRC, hoping somebody would know of a solution. I got some helpful and encouraging feedback, but nobody knew of this particular bug. See the recording below, or click here:

The next step was to figure out if there was something wrong with /bin/dmesg. Running strace -o dmesg-strace.log dmesg -w shows the following pertinent lines:

openat(AT_FDCWD, "/dev/kmsg", O_RDONLY) = 3
lseek(3, 0, SEEK_DATA)                  = 0
read(3, "6,242,717857,-;futex hash table "..., 8191) = 79
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0x16), ...}) = 0
openat(AT_FDCWD, "/usr/lib64/gconv/gconv-modules.cache", O_RDONLY) = 4
fstat(4, {st_mode=S_IFREG|0644, st_size=26388, ...}) = 0
mmap(NULL, 26388, PROT_READ, MAP_SHARED, 4, 0) = 0x7fed92688000
close(4)                                = 0
futex(0x7fed925f9a14, FUTEX_WAKE_PRIVATE, 2147483647) = 0
write(1, "\33[32m[    0.717857] \33[0m\33[33mfut"..., 97) = 97
… SNIP …
read(3, "6,1853,137347289701,-;input: Mic"..., 8191) = 128
write(1, "\33[32m[137347.289701] \33[0m\33[33min"..., 140) = 140
read(3,

The last line indicates a pending read() that never completes. Note that file descriptor 3 refers to the /dev/kmsg device. Nothing out of the ordinary occurs, except that the final read() simply hangs.
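To make sense of what dmesg -w is doing here, consider a minimal /dev/kmsg follower. This is my own sketch, not util-linux code; it assumes the documented /dev/kmsg semantics, where each read() returns one log record and blocks once the reader has caught up. The hang shown above is exactly this read() never returning:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
  char buf[8192];
  int fd = open("/dev/kmsg", O_RDONLY);
  if (fd < 0) {
    perror("open /dev/kmsg");
    return 1;
  }
  for (;;) {
    /* One record per read(); once we have caught up, this blocks
     * until the kernel wakes the reader on a new message. */
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n < 0)
      break;
    buf[n] = '\0';
    fputs(buf, stdout);
  }
  close(fd);
  return 0;
}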

I was at a loss to explain the hung read(); really lost, honestly. So I went on and inspected the changes to /bin/dmesg shipped by util-linux, and found no sign of significant changes. I did run dmesg built from master just to be sure. See the commit log of dmesg.c here. Additionally, I searched the util-linux bug tracker and found nothing relevant.

Given I had no solution yet, I decided to resort to Googling things, hoping somebody had discussed this bug before. Keywords I tried are:

  • dmesg follow no longer working
  • dmesg kmsg no more messages
  • linux kmsg read hang
  • "/dev/kmsg" hang
  • "dmesg -w" hangs

None of these came up with anything useful. I was using DuckDuckGo mainly, with some Google queries sprinkled on top.

I then visited the torvalds/linux GitHub repository, searched for "kmsg", and did not find a commit that looked like a fix. I had picked up from reading commits that /dev/kmsg is written to via the printk functions, so on a whim I decided to look at the changes made to kernel/printk/printk.c. Reading through the commit log of printk.c, I realized the most recent commit was likely the fix:

commit ab6f762f0f53162d41497708b33c9a3236d3609e
Author: Sergey Senozhatsky <protected@email>
Date:   Tue Mar 3 20:30:02 2020 +0900

    printk: queue wake_up_klogd irq_work only if per-CPU areas are ready
    
    printk_deferred(), similarly to printk_safe/printk_nmi, does not
    immediately attempt to print a new message on the consoles, avoiding
    calls into non-reentrant kernel paths, e.g. scheduler or timekeeping,
    which potentially can deadlock the system.
    
    Those printk() flavors, instead, rely on per-CPU flush irq_work to print
    messages from safer contexts.  For same reasons (recursive scheduler or
    timekeeping calls) printk() uses per-CPU irq_work in order to wake up
    user space syslog/kmsg readers.
    
    However, only printk_safe/printk_nmi do make sure that per-CPU areas
    have been initialised and that it's safe to modify per-CPU irq_work.
    This means that, for instance, should printk_deferred() be invoked "too
    early", that is before per-CPU areas are initialised, printk_deferred()
    will perform illegal per-CPU access.
    
    Lech Perczak [0] reports that after commit 1b710b1b10ef ("char/random:
    silence a lockdep splat with printk()") user-space syslog/kmsg readers
    are not able to read new kernel messages.
    
    The reason is printk_deferred() being called too early (as was pointed
    out by Petr and John).
    
    Fix printk_deferred() and do not queue per-CPU irq_work before per-CPU
    areas are initialized.
    
    Link: https://lore.kernel.org/lkml/aa0732c6-5c4e-8a8b-a1c1-75ebe3dca05b@camlintechnologies.com/
    Reported-by: Lech Perczak <protected@email>
    Signed-off-by: Sergey Senozhatsky <protected@email>
    Tested-by: Jann Horn <protected@email>
    Reviewed-by: Petr Mladek <protected@email>
    Cc: Greg Kroah-Hartman <protected@email>
    Cc: Theodore Ts'o <protected@email>
    Cc: John Ogness <protected@email>
    Signed-off-by: Linus Torvalds <protected@email>

My understanding of the Linux kernel architecture is not comprehensive, let alone competent, but the commit message describes:

  1. syslog/kmsg readers — which include dmesg and syslog-ng,
  2. certain printk() flavors that don't immediately attempt to print a new message to the console,
  3. and syslog/kmsg readers that might not wake up.

Indeed, it's a bit hard to wrap my minimal kernel understanding around this; however, reading the linked mailing list thread clears things up significantly:

After upgrading kernel on our boards from v4.19.105 to v4.19.106 we found out that syslog fails to read the messages after ones read initially after opening /proc/kmsg just after booting. I also found out, that output of 'dmesg --follow' also doesn't react on new printks appearing for whatever reason - to read new messages, reopening /proc/kmsg or /dev/kmsg was needed. I bisected this down to commit 15341b1dd409749fa5625e4b632013b6ba81609b ("char/random: silence a lockdep splat with printk()"), and reverting it on top of v4.19.106 restored correct behaviour.

— Lech Perczak

Now that sounds like the issue I'm having! The thread also notes the bug is present in 4.19.106 (fixed in 4.19.107 — see this commit), and affects users of 5.5.9, 5.5.15, and 5.6.3 (see the PATCHv2 thread).
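For the curious, the heart of the fix is small. The following is condensed and paraphrased from the commit, not the verbatim diff: wake_up_klogd() now refuses to queue the per-CPU irq_work until the per-CPU areas are ready, so an early printk_deferred() can no longer corrupt the irq_work that later wakes up syslog/kmsg readers:

static bool __printk_percpu_data_ready __read_mostly;

bool printk_percpu_data_ready(void)
{
        return __printk_percpu_data_ready;
}

void wake_up_klogd(void)
{
        /* Bail out before per-CPU areas are initialised; touching the
         * per-CPU irq_work "too early" is what broke the wakeups. */
        if (!printk_percpu_data_ready())
                return;

        preempt_disable();
        if (waitqueue_active(&log_wait)) {
                this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
                irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
        }
        preempt_enable();
}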

Applying the patch

The next step is to apply the patch, in order to test and verify that it fixes the issue.

Since fall of last year, I have used sys-kernel/vanilla-kernel to compile, install, and create an initramfs for my two machines. This is a great ebuild because it uses a kernel .config based on Arch Linux's, so it is compatible with most machines. It is also streamlined, in that it does all the work for you — no more manually configuring and remembering which make invocations are necessary to update the kernel. That's not hard to get right, but it's not particularly interesting for my use-case. Additionally, with sys-kernel/vanilla-kernel the kernel and its modules are packaged, and can be distributed to my other machine as a binpkg. This streamlines deployment significantly.

To add the patch to this ebuild, I simply have to drop the patch file into /etc/portage/patches/sys-kernel/vanilla-kernel. In my case I chose to drop it into /etc/portage/patches/sys-kernel/vanilla-kernel:5.6.4, because I would rather the patch be applied only to the kernel version I currently have installed than to all versions of sys-kernel/vanilla-kernel. This ensures that when I upgrade to the upcoming 5.7 release (which includes the fix), the patch won't be applied, and emerge won't fail due to the patch not applying cleanly.

The commands (commit to my /etc/portage):

mkdir -p /etc/portage/patches/sys-kernel/vanilla-kernel:5.6.4
curl -o /etc/portage/patches/sys-kernel/vanilla-kernel:5.6.4/fix-dmesg--follow.patch \
        https://github.com/torvalds/linux/commit/ab6f762f0f53162d41497708b33c9a3236d3609e.patch
emerge -1av sys-kernel/vanilla-kernel:5.6.4

An hour later and the kernel is installed. After the reboot, indeed dmesg -w works once again! And the log messages in /var/log/kern.log have timestamps that correctly reflect the kernel time!

Conclusion

Even kernels have regressions. As discussed on IRC, I was reminded that the kernel project is not responsible for userland, so it's possible such test cases are not on the radar of most kernel developers. Perhaps it's the distros' responsibility to perform integrated system testing to catch bugs like this. In any case, it is still a surprise to see such a regression occur. We like to think of the kernel as an infallible magical machine that doesn't break unless you do something patently wrong, but this isn't really the case. We're all human.

I want to thank Tyil, Sergey (the patch author), Lech (the bug reporter), and some folks from the #linux IRC channel for helping me pinpoint this issue. The reader may think this is a lot of effort to go through for such a simple bug — but it's really important for the kernel to work. If the kernel misbehaves, anything is up for grabs. Unlike a bug-laden browser, which we accept will have crashing bugs in it, a kernel that crashes or misbehaves is almost as bad as failing hardware — you lose your application data, your productivity, and your trust in the operating system itself.

It is important to mention that LTS (long term support) kernels exist. Given the amount of trouble I went through to address this issue, and the fact I'd rather not have things breaking, I don't think I should be running a mainline kernel at the moment. Perhaps I can install both side by side, then pick 'n choose which kernel to boot as the occasion demands.

I am very interested to hear your suggestions, dear reader, for kernel maintenance and version selection strategies. You can find my contact details at https://winny.tech/. Thank you for reading.

Tags: linux computing
24 Apr 2020

Debugging Zathura, GTK (don't forget about seccomp)

Zathura is a fantastic PDF viewer. It also supports PostScript, DjVu, and comic book archives. In particular, it supports using mupdf as the backend, so it's rather fast (unlike poppler, used by evince and friends). Here is a screenshot of Zathura:

zathura.png
Figure 1: screenshot of zathura

Now that I've introduced Zathura, I want to talk about a problem I had recently. I wanted to print a document a couple weeks ago, but found that whenever I issued a :print command in Zathura, the program would crash. I got this error in dmesg:

[94592.482544] zathura[26424]: segfault at 201 ip 00007f0bc27d0086 sp 00007ffeada0d0d8 error 4 in libc-2.29.so[7f0bc2752000+158000]
[94592.482557] Code: 0f 1f 40 00 66 0f ef c0 66 0f ef c9 66 0f ef d2 66 0f ef db 48 89 f8 48 89 f9 48 81 e1 ff 0f 00 00 48 81 f9 cf 0f 00 00 77 6a <f3> 0f 6f 20 66 0f 74 e0 66 0f d7 d4 85 d2 74 04 0f bc c2 c3 48 83

Let's get a crash dump

I spent a bunch of time trying to get crash dumps from Zathura, and was largely unsuccessful, until I realized the wonkiness I was dealing with (see below).

Try to run Zathura in GDB

First I tried getting a backtrace directly from gdb. It appears to run, but zathura does not create a window:

winston@snowcrash ~ $ gdb --args zathura ~/docs/uni/classes/cs-655/handouts/spim_documentation.pdf 
Reading symbols from zathura...
(gdb) run
Starting program: /usr/bin/zathura /home/winston/docs/uni/classes/cs-655/handouts/spim_documentation.pdf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7ffff5f26700 (LWP 18882)]
[New Thread 0x7ffff5725700 (LWP 18883)]

Cannot find user-level thread for LWP 18744: generic error
(gdb)

The error message Cannot find user-level thread for LWP 18744: generic error is mentioned on the Sourceware Wiki. The Wiki FAQ suggests I may have a mismatch between libthread_db.so.1 and libpthread.so.0 or am using a 64-bit debugger with a 32-bit program. Both zathura and gdb are amd64 programs on my box. And I only have one version of amd64 glibc installed. Given the facts, it seemed like I was dealing with a different problem.

What's more, I tested running another program in gdb, in my case cat, and it worked fine:

winston@snowcrash ~ $ gdb cat
Reading symbols from cat...
(gdb) run
Starting program: /bin/cat 
^C
Program received signal SIGINT, Interrupt.
0x00007ffff7eb5cb5 in __GI___libc_read (fd=0, buf=0x7ffff7fb0000, nbytes=131072) at ../sysdeps/unix/sysv/linux/read.c:26
26        return SYSCALL_CANCEL (read, fd, buf, nbytes);

Try to attach a Zathura process in GDB

When attaching GDB to a process, make sure you have permission to do so; out of the box, most distros limit debuggers to attaching to child processes, or to attaching at all only if gdb is run as root. In any case, one can run sysctl kernel.yama.ptrace_scope=0 to temporarily loosen restrictions and allow attaching gdb to any process of the same user. See ptrace(2) and grep for ptrace_scope.
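As an aside, if you would rather not loosen ptrace_scope system-wide, a process can opt in to being traced under Yama. A minimal sketch (my own example, assuming the Yama LSM is active with kernel.yama.ptrace_scope=1):

#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

int main(void)
{
  /* Allow any process (of the same user) to ptrace-attach to us,
   * overriding ptrace_scope=1 for this process only. */
  if (prctl(PR_SET_PTRACER, PR_SET_PTRACER_ANY, 0, 0, 0) != 0)
    perror("prctl(PR_SET_PTRACER)");
  printf("attach with: gdb -p %d\n", getpid());
  pause(); /* wait around so a debugger can attach */
  return 0;
}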

Now that gdb can attach to any other process I own, I tried to attach to zathura, without any success:

winston@snowcrash ~ $ gdb -p 3541 zathura
Reading symbols from zathura...
Attaching to program: /usr/bin/zathura, process 3541
ptrace: Operation not permitted.
(gdb) 

Indeed this also worked fine with cat:

winston@snowcrash ~ $ gdb -p 6885 cat
Reading symbols from cat...
Attaching to program: /bin/cat, process 6885
Reading symbols from /lib64/libc.so.6...
Reading symbols from /usr/lib/debug//lib64/libc-2.29.so.debug...
Reading symbols from /lib64/ld-linux-x86-64.so.2...
Reading symbols from /usr/lib/debug//lib64/ld-2.29.so.debug...
0x00007f93b5fa4cb5 in __GI___libc_read (fd=0, buf=0x7f93b609f000, nbytes=131072) at ../sysdeps/unix/sysv/linux/read.c:26
26        return SYSCALL_CANCEL (read, fd, buf, nbytes);
(gdb) 

Try to get Zathura to dump core

I moved on to the next approach to get a backtrace — writing core files. First I'll describe what that entails on my setup:

Enabling core dumps

On my setup, relatively vanilla Gentoo with OpenRC, it is straightforward to enable this: just create /etc/security/limits.d/core.conf containing the single line (see limits.conf(5)):

*             soft      core       unlimited

And relogin. Verify that the output of ulimit -a shows an unlimited core file size:

winston@snowcrash ~ $ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63422
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63422
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The second part is ensuring the sysctl kernel.core_pattern is set to something reasonable. If it's a pipeline (the first character is a |), make sure you understand what that pipeline does, or set it to a simple filename pattern. More information in core(5). A good file pattern might be %e.%h.%t.core, which produces core files such as cat.snowcrash.1586300242.core. The time can be converted into a human-readable form with date -d@1586300242.

winston@snowcrash ~ $ sudo sysctl kernel.core_pattern=%e.%h.%t.core
kernel.core_pattern = %e.%h.%t.core
winston@snowcrash ~ $ cat
^\Quit (core dumped)
winston@snowcrash ~ $ ls *.core
cat.snowcrash.1586300242.core
winston@snowcrash ~ $ coretime() { date -d @"$(cut -d. -f3 <<<"$1")"; }
winston@snowcrash ~ $ coretime cat.snowcrash.1586300242.core 
Tue 07 Apr 2020 05:57:22 PM CDT

Getting a core dump

I fired up Zathura for what felt like the tenth time and triggered the bug, but indeed, no core dump! I even tried running zathura and sending SIGQUIT (^\ — Control-Backslash in most terminals), which should cause the process to dump core, but to no avail.

In the above shell session, I demonstrated that I was able to dump core with cat, so indeed core dumps are enabled.

Investigating why I can't get a crash dump

This feels like madness. There is no obvious reason why I can't get a backtrace via any of the above techniques. So I took a deep breath and grabbed the source code, thinking they must be doing something a bit too clever for my liking.

Getting the source

On Gentoo I usually do something like the following to grab program source:

winston@snowcrash ~ $ ebuild $(equery w zathura) prepare
 * zathura-0.4.5.tar.gz BLAKE2B SHA512 size ;-) ...                                                                     [ ok ]
 * checking ebuild checksums ;-) ...                                                                                    [ ok ]
 * checking miscfile checksums ;-) ...                                                                                  [ ok ]
>>> Unpacking source...
>>> Unpacking zathura-0.4.5.tar.gz to /var/tmp/portage/app-text/zathura-0.4.5/work
>>> Source unpacked in /var/tmp/portage/app-text/zathura-0.4.5/work
>>> Preparing source in /var/tmp/portage/app-text/zathura-0.4.5/work/zathura-0.4.5 ...
>>> Source prepared.

Scanning the source

A quick scan of the source tree yields some very interesting files — including some that will become more interesting as you read on:

winston@snowcrash .../work/zathura-0.4.5 $ grep -riF ptrace .
./zathura/seccomp-filters.c:  /* prevent escape via ptrace */
./zathura/seccomp-filters.c:  DENY_RULE(ptrace);
./zathura/seccomp-filters.c:  /* prevent escape via ptrace */

Notice the filename. It appears Zathura utilizes seccomp, and somehow messes with debuggers' use of ptrace(). Here is a tree of the files I'll be walking through:

winston@snowcrash .../work/zathura-0.4.5 $ tree -L 2 -F \
> -P 'meson*|README|AUTHORS|LICENSE|main.[ch]|*seccomp*.[ch]|zathura.[ch]|config.[ch]'
.
├── AUTHORS
├── data/
│   ├── icon-128/
│   ├── icon-16/
│   ├── icon-256/
│   ├── icon-32/
│   ├── icon-64/
│   └── meson.build
├── doc/
│   ├── api/
│   ├── configuration/
│   ├── installation/
│   ├── man/
│   ├── meson.build
│   └── usage/
├── LICENSE
├── meson.build
├── meson_options.txt
├── po/
│   └── meson.build
├── README
├── subprojects/
├── tests/
│   └── meson.build
└── zathura/
    ├── config.c
    ├── config.h
    ├── main.c
    ├── seccomp-filters.c
    ├── seccomp-filters.h
    ├── zathura.c
    └── zathura.h

16 directories, 16 files

Where Seccomp is used in the code

Indeed, if we look in seccomp-filters.c, it has a couple of lines that suggest zathura prevents dumping core and using ptrace():

#define ADD_RULE(str_action, action, call, ...)                                \
  do {                                                                         \
    seccomp_rule_add(ctx, action, SCMP_SYS(call), __VA_ARGS__);                \
  } while (0)

#define DENY_RULE(call) ADD_RULE("kill", SCMP_ACT_KILL, call, 0)

int
seccomp_enable_basic_filter(void)
{
  /* prevent escape via ptrace */
  if (prctl(PR_SET_DUMPABLE, 0, 0, 0, 0)) {
    girara_error("prctl PR_SET_DUMPABLE");
    return -1;
  }

  /* … DENY_RULE() calls and filter loading elided … */
  return 0;
}

Please note I tidied up the code for clarity. Looking at prctl(2), we can see that prctl(PR_SET_DUMPABLE, 0, 0, 0, 0) not only prevents core dumps, but also prevents other processes from attaching to Zathura to debug it.
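The effect is easy to reproduce outside Zathura. Here is a toy program (my own, not from the Zathura tree) that clears its dumpable flag the same way; once it is running, gdb -p fails with the familiar ptrace: Operation not permitted, and SIGQUIT produces no core file:

#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

int main(void)
{
  /* Same call as seccomp_enable_basic_filter(): no core dumps, and
   * unprivileged processes may no longer ptrace-attach to us. */
  if (prctl(PR_SET_DUMPABLE, 0, 0, 0, 0) != 0) {
    perror("prctl(PR_SET_DUMPABLE)");
    return 1;
  }
  printf("pid %d: try `gdb -p %d` or ^\\; neither will work\n",
         getpid(), getpid());
  pause();
  return 0;
}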

Now to figure out how it's called. Take a look at zathura.c:

bool
zathura_init(zathura_t* zathura)
{
#ifdef WITH_SECCOMP
  /* initialize seccomp filters */
  switch (zathura->global.sandbox) {
    case ZATHURA_SANDBOX_NONE:
      girara_debug("Sandbox deactivated.");
      break;
    case ZATHURA_SANDBOX_NORMAL:
      girara_debug("Basic sandbox allowing normal operation.");
      if (seccomp_enable_basic_filter() != 0) {
        girara_error("Failed to initialize basic seccomp filter.");
        goto error_free;
      }
      break;
    case ZATHURA_SANDBOX_STRICT:
      girara_debug("Strict sandbox preventing write and network access.");
      if (seccomp_enable_strict_filter() != 0) {
        girara_error("Failed to initialize strict seccomp filter.");
        goto error_free;
      }
      break;
  }
#endif
}

In the zathura_init procedure, seccomp is conditionally compiled in using an #ifdef check. It becomes apparent there are three sandbox modes supported by Zathura. Next, let's see where zathura_init() is called in main.c:

static zathura_t*
init_zathura(const char* config_dir, const char* data_dir,
             const char* cache_dir, const char* plugin_path, char** argv,
             const char* synctex_editor, Window embed)
{
  /* create zathura session */
  zathura_t* zathura = zathura_create();
  if (zathura == NULL) {
    return NULL;
  }

  /* Init zathura */
  if (zathura_init(zathura) == false) {
    zathura_free(zathura);
    return NULL;
  }

  return zathura;
}

/* main function */
GIRARA_VISIBLE int
main(int argc, char* argv[])
{
  /* CLI parsing and initialization */

  /* Create zathura session */
  zathura_t* zathura = init_zathura(config_dir, data_dir, cache_dir,
                                    plugin_path, argv, synctex_editor, embed);

  /* More initialization logic */

  /* run zathura */
  gtk_main();

  /* free zathura */
  return ret;
}


The program's entry point, main(), calls init_zathura(), which itself calls zathura_init(), which then calls into seccomp_enable_*_filter(). This makes it clear that Zathura always initializes sandboxing on startup, unless zathura->global.sandbox is ZATHURA_SANDBOX_NONE.

If one looks in the top-level meson.build, we can see where the WITH_SECCOMP preprocessor definition comes from:

if seccomp.found()
  build_dependencies += seccomp
  defines += '-DWITH_SECCOMP'
  additional_sources += files('zathura/seccomp-filters.c')
endif

Now comes the matter of how to debug this application. Initially I succeeded by configuring Gentoo to not use seccomp with Zathura. On second look, there also appears to be a sandbox configuration option. In the next few sections I explain how to manually disable seccomp, both with Gentoo USE flags and by configuring Zathura at runtime.

Disabling Seccomp by USE flag

Taking a closer look at the app-text/zathura package in Gentoo's ebuild repository, there is a seccomp USE flag.

winston@snowcrash ~ $ eix -e app-text/zathura
[I] app-text/zathura
     Available versions:  0.4.3^t 0.4.4^t{tbz2} (~)0.4.5^t{tbz2} **9999*l^t {doc +magic seccomp sqlite synctex test}
     Installed versions:  0.4.5^t{tbz2}(05:21:00 PM 04/07/2020)(doc magic seccomp -sqlite -synctex -test)
     Homepage:            http://pwmt.org/projects/zathura/
     Description:         A highly customizable and functional document viewer

Let's disable this seccomp USE flag:

snowcrash ~ # echo 'app-text/zathura -seccomp' >> /etc/portage/package.use/zathura
snowcrash ~ # emerge -1av app-text/zathura

These are the packages that would be merged, in order:

Calculating dependencies              ... done!                         
[ebuild   R   ~] app-text/zathura-0.4.5::gentoo  USE="doc magic -seccomp* -sqlite -synctex -test" 0 KiB

Total: 1 package (1 reinstall), Size of downloads: 0 KiB

Would you like to merge these packages? [Yes/No] 

With Zathura rebuilt without seccomp support, I am able to attach a debugger. Success!

Disabling Seccomp via configuration option

After reviewing zathura's configuration code, I found there is a sandbox option that can be configured in one's zathurarc. It was not mentioned in the zathura(1) manpage, nor in its --help text; I discovered it in the README. Later I also found it mentioned in the zathurarc(5) manpage. As such, heed this friendly reminder — make sure to read the README, and make sure to read the related manpages listed in the SEE ALSO section of a given manpage!

Back to the matter at hand. Looking at config.c:

static void
cb_sandbox_changed(girara_session_t* session, const char* UNUSED(name),
                   girara_setting_type_t UNUSED(type), const void* value, void* UNUSED(data))
{
  g_return_if_fail(value != NULL);
  g_return_if_fail(session != NULL);
  g_return_if_fail(session->global.data != NULL);
  zathura_t* zathura = session->global.data;

  const char* sandbox = value;
  if (g_strcmp0(sandbox, "none") == 0) {
    zathura->global.sandbox = ZATHURA_SANDBOX_NONE;
  } else if (g_strcmp0(sandbox, "normal") == 0)  {
    zathura->global.sandbox = ZATHURA_SANDBOX_NORMAL;
  } else if (g_strcmp0(sandbox, "strict") == 0) {
    zathura->global.sandbox = ZATHURA_SANDBOX_STRICT;
  } else {
    girara_error("Invalid sandbox option");
  }
}

void
config_load_default(zathura_t* zathura)
{
  girara_session_t* gsession = zathura->ui.session;

  /* default to no sandbox when running in WSL */
  const char* string_value = running_under_wsl() ? "none" : "normal";
  girara_setting_add(gsession, "sandbox",
                     string_value, STRING, true,
                     _("Sandbox level"), cb_sandbox_changed,
                     NULL);
}

Now we know there is an event listener for the sandbox configuration option. I know I skipped a few steps, but the pattern is pretty clear for my purposes. After adding set sandbox none to my ~/.config/zathura/config, Zathura was able to start up without a sandbox, and I was able to attach a debugger.

Getting a more informative backtrace

Now, with seccomp disabled I was able to get a crash dump:

winston@snowcrash ~ $ gdb --args zathura ~/docs/uni/classes/cs-655/handouts/spim_documentation.pdf 
Reading symbols from zathura...
(gdb) run
Starting program: /usr/bin/zathura /home/winston/docs/uni/classes/cs-655/handouts/spim_documentation.pdf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7ffff5f73700 (LWP 15633)]
[New Thread 0x7ffff5772700 (LWP 15634)]

(zathura:15629): dbind-WARNING **: 23:49:52.224: Couldn't register with accessibility bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
[New Thread 0x7fffe52b7700 (LWP 15639)]
[New Thread 0x7fffe4ab6700 (LWP 15645)]
[New Thread 0x7fffcbfff700 (LWP 15646)]

Thread 1 "zathura" received signal SIGSEGV, Segmentation fault.
__strlen_sse2 () at ../sysdeps/x86_64/multiarch/../strlen.S:120
120             movdqu  (%rax), %xmm4
(gdb) bt
#0  __strlen_sse2 () at ../sysdeps/x86_64/multiarch/../strlen.S:120
#1  0x00007ffff722753d in g_strjoinv () at /usr/lib64/libglib-2.0.so.0
#2  0x00007fffec65e31b in avahi_service_resolver_cb () at /usr/lib64/gtk-3.0/3.0.0/printbackends/libprintbackend-cups.so
#3  0x00007ffff73d4973 in g_task_return_now () at /usr/lib64/libgio-2.0.so.0
#4  0x00007ffff73d531d in g_task_return.part () at /usr/lib64/libgio-2.0.so.0
#5  0x00007ffff7429f0f in g_dbus_connection_call_done () at /usr/lib64/libgio-2.0.so.0
#6  0x00007ffff73d4973 in g_task_return_now () at /usr/lib64/libgio-2.0.so.0
#7  0x00007ffff73d49a9 in complete_in_idle_cb () at /usr/lib64/libgio-2.0.so.0
#8  0x00007ffff72064ef in g_main_context_dispatch () at /usr/lib64/libglib-2.0.so.0
#9  0x00007ffff72068c0 in g_main_context_iterate.isra () at /usr/lib64/libglib-2.0.so.0
#10 0x00007ffff7206bd3 in g_main_loop_run () at /usr/lib64/libglib-2.0.so.0
#11 0x00007ffff796a105 in gtk_main () at /usr/lib64/libgtk-3.so.0
#12 0x0000555555561871 in main ()
(gdb) frame 1
#1  0x00007ffff722753d in g_strjoinv () from /usr/lib64/libglib-2.0.so.0
(gdb) list
115     #ifdef AS_STRNLEN
116             andq    $-16, %rax
117             FIND_ZERO
118     #else
119             /* Test first 16 bytes unaligned.  */
120             movdqu  (%rax), %xmm4
121             PCMPEQ  %xmm0, %xmm4
122             pmovmskb        %xmm4, %edx
123             test    %edx, %edx
124             je      L(next48_bytes)

Notice how the frame's listing shows assembly instructions. It looks like we are missing debug symbols. Additionally, it would be nice to have the sources installed, because then the debugger can show us line-for-line backtraces and make it easy to single-step to the crash.

Installing debug symbols on Gentoo

On Gentoo one can use equery b to discover which package owns a particular file:

winston@snowcrash ~ $ for f in /usr/lib64/libglib-2.0.so.0 \
> /usr/lib64/gtk-3.0/3.0.0/printbackends/libprintbackend-cups.so \
> /usr/lib64/libgio-2.0.so.0 /usr/lib64/libglib-2.0.so.0 \
> /usr/lib64/libgtk-3.so.0; do 
>     equery -q b $f
> done | sort -u
dev-libs/glib-2.60.7-r2
x11-libs/gtk+-3.24.13

I came up with the following packages to install debug symbols for:

  • dev-libs/glib
  • x11-libs/gtk+:3
  • and app-text/zathura for good measure.

Using /etc/portage/env/debugsyms and /etc/portage/env/installsources — Portage environment files loosely based on the Gentoo Wiki — I can simply add the following lines to my /etc/portage/package.env/:

dev-libs/glib debugsyms installsources
x11-libs/gtk+:3 debugsyms installsources
app-text/zathura debugsyms installsources

And then I manually re-emerged each package, because unfortunately Portage does not appear to consider environment files when determining whether to rebuild packages.

snowcrash ~ # emerge -1av app-text/zathura dev-libs/glib x11-libs/gtk+:3

These are the packages that would be merged, in order:

Calculating dependencies           ... done!                    
[ebuild   R    ] dev-libs/glib-2.60.7-r2:2::gentoo  USE="dbus debug* (mime) xattr -fam -gtk-doc (-selinux) -static-libs -systemtap -test -utils" ABI_X86="32 (64) (-x32)" 0 KiB
[ebuild   R    ] x11-libs/gtk+-3.24.13:3::gentoo  USE="X cups examples introspection xinerama (-aqua) -broadway -cloudprint -colord -gtk-doc -test -vim-syntax -wayland" ABI_X86="(64) -32 (-x32)" 0 KiB
[ebuild   R   ~] app-text/zathura-0.4.5::gentoo  USE="doc magic -seccomp -sqlite -synctex -test" 0 KiB

Total: 3 packages (3 reinstalls), Size of downloads: 0 KiB

Would you like to merge these packages? [Yes/No] 

Portage installs the source code under /usr/src/debug/${CATEGORY}/${PF}, where PF is the full package name, version, and revision, such as /usr/src/debug/x11-base/xorg-server-1.20.5-r2. Debug symbols are installed under /usr/lib/debug.

A better backtrace

After getting the debug symbols & sources installed, I now get the following backtrace:

winston@snowcrash ~ $ gdb --args zathura ~/docs/uni/classes/cs-655/handouts/spim_documentation.pdf 
Reading symbols from zathura...
Reading symbols from /usr/lib/debug//usr/bin/zathura.debug...
(gdb) run
Starting program: /usr/bin/zathura /home/winston/docs/uni/classes/cs-655/handouts/spim_documentation.pdf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7ffff5f62700 (LWP 12321)]
[New Thread 0x7ffff5761700 (LWP 12322)]
[New Thread 0x7fffe52b7700 (LWP 12329)]
[New Thread 0x7fffe4ab6700 (LWP 12333)]
[New Thread 0x7fffcbfff700 (LWP 12334)]

Thread 1 "zathura" received signal SIGSEGV, Segmentation fault.
__strlen_sse2 () at ../sysdeps/x86_64/multiarch/../strlen.S:120
120             movdqu  (%rax), %xmm4
(gdb) bt
#0  __strlen_sse2 () at ../sysdeps/x86_64/multiarch/../strlen.S:120
#1  0x00007ffff721680d in g_strjoinv (separator=separator@entry=0x7fffec702546 "-", str_array=str_array@entry=0x555555e56ac0)
    at ../glib-2.60.7/glib/gstrfuncs.c:2585
#2  0x00007fffec6fd31b in avahi_service_resolver_cb
    (source_object=<optimized out>, res=<optimized out>, user_data=user_data@entry=0x555555e06040)
    at /usr/src/debug/x11-libs/gtk+-3.24.13/gtk+-3.24.13/modules/printbackends/cups/gtkprintbackendcups.c:3223
#3  0x00007ffff73caf79 in g_task_return_now (task=0x555555ea01a0 [GTask]) at ../glib-2.60.7/gio/gtask.c:1209
#4  0x00007ffff73cba9d in g_task_return (task=0x555555ea01a0 [GTask], type=<optimized out>) at ../glib-2.60.7/gio/gtask.c:1278
#5  0x00007ffff73cc00c in g_task_return (type=G_TASK_RETURN_SUCCESS, task=<optimized out>) at ../glib-2.60.7/gio/gtask.c:1678
#6  g_task_return_pointer (task=<optimized out>, result=<optimized out>, result_destroy=<optimized out>)
    at ../glib-2.60.7/gio/gtask.c:1683
#7  0x0000000000000000 in  ()

If you feel inclined, here is the full backtrace (bt full).

A lot more useful, huh?

Analyzing the crash

Knowing GDB is a powerful, useful skill. Nothing beats understanding your debugger. Not even printf debugging.

Let's start with the source of the crash:

(gdb) frame 0
#0  __strlen_sse2 () at ../sysdeps/x86_64/multiarch/../strlen.S:120
120             movdqu  (%rax), %xmm4
(gdb) list
115     #ifdef AS_STRNLEN
116             andq    $-16, %rax
117             FIND_ZERO
118     #else
119             /* Test first 16 bytes unaligned.  */
120             movdqu  (%rax), %xmm4
121             PCMPEQ  %xmm0, %xmm4
122             pmovmskb        %xmm4, %edx
123             test    %edx, %edx
124             je      L(next48_bytes)
(gdb) info registers rax
rax            0x61                97

So strlen is trying to dereference address 0x61; that doesn't look right. Checking the output of info proc mappings shows zathura doesn't have mapped memory corresponding to the value in rax.

(gdb) info proc mappings
process 12314                                                                                                                  
Mapped address spaces:                                                                                                         

          Start Addr           End Addr       Size     Offset objfile                                                          
      0x555555554000     0x55555555f000     0xb000        0x0 /usr/bin/zathura                                                 
      0x55555555f000     0x555555581000    0x22000     0xb000 /usr/bin/zathura                                                 

…SNIP…

      0x7ffff7ffd000     0x7ffff7ffe000     0x1000    0x27000 /lib64/ld-2.29.so
      0x7ffff7ffe000     0x7ffff7fff000     0x1000        0x0 
      0x7ffffffdd000     0x7ffffffff000    0x22000        0x0 [stack]
  0xffffffffff600000 0xffffffffff601000     0x1000        0x0 [vsyscall]

Now let's carry on with the second frame.

(gdb) frame 1
#1  0x00007ffff721680d in g_strjoinv (separator=separator@entry=0x7fffec702546 "-", str_array=str_array@entry=0x555555e56ac0)
    at ../glib-2.60.7/glib/gstrfuncs.c:2585
2585          for (i = 1; str_array[i] != NULL; i++)
(gdb) info frame
Stack level 1, frame at 0x7fffffffd140:
 rip = 0x7ffff721680d in g_strjoinv (../glib-2.60.7/glib/gstrfuncs.c:2585); saved rip = 0x7fffec6fd31b
 called by frame at 0x7fffffffd200, caller of frame at 0x7fffffffd0f0
 source language c.
 Arglist at 0x7fffffffd0e8, args: separator=separator@entry=0x7fffec702546 "-", str_array=str_array@entry=0x555555e56ac0
 Locals at 0x7fffffffd0e8, Previous frame's sp is 0x7fffffffd140
 Saved registers:
  rbx at 0x7fffffffd108, rbp at 0x7fffffffd110, r12 at 0x7fffffffd118, r13 at 0x7fffffffd120, r14 at 0x7fffffffd128,
  r15 at 0x7fffffffd130, rip at 0x7fffffffd138
(gdb) list
2580          gsize separator_len;
2581
2582          separator_len = strlen (separator);
2583          /* First part, getting length */
2584          len = 1 + strlen (str_array[0]);
2585          for (i = 1; str_array[i] != NULL; i++)
2586            len += strlen (str_array[i]);
2587          len += separator_len * (i - 1);
2588
2589          /* Second part, building string */
(gdb) print *str_array
$1 = (gchar *) 0x555555cea670 "Canon"
(gdb) print str_array[1]
$2 = (gchar *) 0x555555dac870 "MF632C"
(gdb) print str_array[2]
$3 = (gchar *) 0x555555e87150 "634C"
(gdb) print str_array[3]
$4 = (gchar *) 0x61 <error: Cannot access memory at address 0x61>

Indeed, we cannot access memory at address 0x61. And looking at the source and documentation for g_strjoinv(), the str_array argument should be a NULL-terminated array of strings.
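To illustrate the contract, here is a tiny standalone GLib example (mine, not from GTK+): the first array honors the NULL sentinel, while the second is exactly the kind of array that sends g_strjoinv() marching past the end:

#include <glib.h>

int main(void)
{
  gchar *ok[]  = { "Canon", "MF632C", "634C", NULL };
  gchar *bad[] = { "Canon", "MF632C", "634C" }; /* missing sentinel! */

  gchar *joined = g_strjoinv("-", ok);
  g_print("%s\n", joined); /* prints: Canon-MF632C-634C */
  g_free(joined);

  /* g_strjoinv("-", bad) would keep reading pointers past the end of
   * the array and call strlen() on garbage, a crash much like ours. */
  (void)bad;
  return 0;
}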

Let's look at the third frame.

(gdb) frame 2
#2  0x00007fffec6fd31b in avahi_service_resolver_cb (source_object=<optimized out>, res=<optimized out>, 
    user_data=user_data@entry=0x555555e06040)
    at /usr/src/debug/x11-libs/gtk+-3.24.13/gtk+-3.24.13/modules/printbackends/cups/gtkprintbackendcups.c:3223
3223                  data->printer_name = g_strjoinv ("-", printer_name_compressed_strv);
(gdb) info frame
Stack level 2, frame at 0x7fffffffd200:
 rip = 0x7fffec6fd31b in avahi_service_resolver_cb
    (/usr/src/debug/x11-libs/gtk+-3.24.13/gtk+-3.24.13/modules/printbackends/cups/gtkprintbackendcups.c:3223); 
    saved rip = 0x7ffff73caf79
 called by frame at 0x7fffffffd220, caller of frame at 0x7fffffffd140
 source language c.
 Arglist at 0x7fffffffd138, args: source_object=<optimized out>, res=<optimized out>, user_data=user_data@entry=0x555555e06040
 Locals at 0x7fffffffd138, Previous frame's sp is 0x7fffffffd200
 Saved registers:
  rbx at 0x7fffffffd1c8, rbp at 0x7fffffffd1d0, r12 at 0x7fffffffd1d8, r13 at 0x7fffffffd1e0, r14 at 0x7fffffffd1e8,
  r15 at 0x7fffffffd1f0, rip at 0x7fffffffd1f8
(gdb) list
3218                          printer_name_compressed_strv[j] = printer_name_strv[i];
3219                          j++;
3220                        }
3221                    }
3222
3223                  data->printer_name = g_strjoinv ("-", printer_name_compressed_strv);
3224
3225                  g_strfreev (printer_name_strv);
3226                  g_free (printer_name_compressed_strv);
3227                  g_free (printer_name);
(gdb) print printer_name_compressed_strv 
$5 = (gchar **) 0x555555e56ac0

Note that the value of printer_name_compressed_strv, 0x555555e56ac0, corresponds to the value of str_array in the previous frame (g_strjoinv()). The full definition of avahi_service_resolver_cb can be read on GNOME's GitLab.

As mentioned above, we found the string array was missing its NULL sentinel. Looking at the following code, do you see the bug? I honestly didn't:

printer_name = g_strdup (name);
g_strcanon (printer_name, PRINTER_NAME_ALLOWED_CHARACTERS, '-');

printer_name_strv = g_strsplit_set (printer_name, "-", -1);
printer_name_compressed_strv = g_new0 (gchar *, g_strv_length (printer_name_strv));
for (i = 0, j = 0; printer_name_strv[i] != NULL; i++)
  {
    if (printer_name_strv[i][0] != '\0')
      {
        printer_name_compressed_strv[j] = printer_name_strv[i];
        j++;
      }
  }

data->printer_name = g_strjoinv ("-", printer_name_compressed_strv);

After spending some time refamiliarizing myself with glib and GTK+, and some more Googling, I found the commit that fixed it. Let me preface the commit with a brief explanation:

  1. g_strcanon() replaces characters not in PRINTER_NAME_ALLOWED_CHARACTERS with a hyphen, i.e. "Canon MF632C/634C" becomes "Canon-MF632C-634C"
  2. g_strsplit_set() splits printer_name on "-", giving the following array:

    (gdb) print *printer_name_strv@g_strv_length(printer_name_strv)+1
    $12 = {0x555555eff250 "Canon", 0x555555ece120 "MF632C", 0x555555f397e0 "634C", 0x0}
    
  3. g_new0() allocates a zero-filled array of pointers of length 3, the number of elements returned by g_strsplit_set()
  4. The for loop copies over the contents of printer_name_strv, but skips empty elements, e.g. in case the above string had two adjacent hyphens.
  5. g_strjoinv() joins each string in the printer_name_compressed_strv array, separated by "-".

The problem occurs because the call to g_new0() does not account for the extra array sentinel element. Indeed, that is what the GitLab commit addresses.
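The fix amounts to reserving room for the sentinel when sizing the array, along these lines (my paraphrase of the upstream commit):

/* One extra zero-filled slot keeps the array NULL-terminated. */
printer_name_compressed_strv = g_new0 (gchar *,
                                       g_strv_length (printer_name_strv) + 1);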

Best way to fix it?

In this case, I did what was best for my time and effort. GTK+ 3.24.14 has been out for a couple months, and GTK+ 3.24.13 is not much older. So instead of dealing with backports (that is, making a patch for the older version of GTK+ and adding it to my install), I took the liberty of bumping my local GTK+ 3 install to GTK+ 3.24.14.

Either approach is not too tricky, in all honesty, given that adding patches to a Gentoo system is as easy as placing the patch in the correct path, and bumping the ebuild usually entails simply unmasking it by accepting the keyworded version (in my case ~amd64).

As such this is all I had to do to fix the issue:

snowcrash ~ # echo '~x11-libs/gtk+-3.24.14 ~amd64' >> /etc/portage/package.accept_keywords/gtk
snowcrash ~ # emerge -uDU -av --changed-deps --verbose-conflicts @world

These are the packages that would be merged, in order:



[ebuild     U ~] x11-libs/gtk+-3.24.14:3::gentoo [3.24.13:3::gentoo] USE="X cups examples introspection xinerama (-aqua) -broadway -cloudprint -colord -gtk-doc -test -vim-syntax -wayland" ABI_X86="(64) -32 (-x32)" 0 KiB

Total: 1 package (1 upgrade), Size of downloads: 0 KiB

Would you like to merge these packages? [Yes/No] 

And voilà. I am able to print!

print-success.png
Figure 2: gtk print dialogue success!

Conclusion

In this post, I described several related challenges:

  1. How to get a backtrace
  2. What happens when seccomp blocks ptrace
  3. How to install debug symbols and source code on Gentoo
  4. What it looks like to pick apart a backtrace
  5. And how I fixed this particular issue

In retrospect, I should have reported the bug to the Gentoo tracker, because this was a bug in the selection of patches cherry-picked off the GTK git repository. Thankfully the affected versions of GTK are no longer in the official Gentoo ebuild repository. I'll be sure to report such bugs going forward! It pleases me how easy Gentoo makes it to debug stuff.

I found the entire experience informative, but also incredibly irritating. I'm no stranger to debugging crashes and grabbing debug symbols, but when something gets in the way of getting backtraces, things get very frustrating. The debugging part is the fun part; dealing with wildcards like seccomp preventing ptrace() with no meaningful error message is a huge time waster.

The lack of literature about debugging seccomp-enabled applications was a factor in this frustration. I only figured out the issue by taking the time to read the source code, grepping for ptrace, and understanding seccomp as it's used in Zathura. Had I read the README I could have saved some time; it's important to read all the documentation.

If you made it this far, you have a lot of patience for ramblings and hobbyist computing. You're terrific! Next time you run into a segfault, put what you've learned to good use!

Tags: gentoo computing
18 Apr 2020

A week in the life of Winston

During these interesting times, I figured it would be a good idea to describe how I've been keeping myself busy, bugs I've fixed, and some of the daily tasks and routines that keep my day structured.

For context: I moved house on the weekend of March 21st, a couple weeks before the Covid-19 fiasco became a front-and-center concern for my geographical region. I am finishing my undergrad in computer science — this is my last semester. The classes I am taking are Compilers, Compiler Implementation Laboratory, and Matrices and Applications. I am currently living in a very rural area, so I have been very successful in maintaining social distance in all aspects of my life.

Daily routine

  1. Eat the exact same thing every day: eggs, bacon, corn tortillas. Take any dietary supplements/vitamins.
  2. Make coffee — Aeropressed. Currently sourcing coffee from Ruby. I wish to see more Ethiopian/Kenyan/Peruvian coffee, but even an earthier Colombian coffee is agreeable.
  3. Spend 30–60 minutes checking email, news, IRC.
  4. Wash up
  5. Spend about 2-3 hours on current tasks — schoolwork, bugfixing, packaging, or researching.
  6. Eat lunch, make some tea
  7. Spend another 2-3 hours on same tasks.
  8. Take a break, preferably away from computer
  9. Spend another couple hours
  10. Have dinner
  11. Spend some time with housemates, make sure we're all on the same page
  12. Spend another couple hours on tasks
  13. Wash up, go to bed

In retrospect, I think I should replace one of those task blocks with off-task things such as gaming, reading books, and so on. That's way too much time on task, and knowing myself, too much time on task makes me less productive.

Package work for this week

For a long time now, I've aimed to keep all my system-wide software packaged in the OS package manager. This allows me to easily rebuild my systems, or deploy the OS on new computers. It also means updates become a lot easier, because the package manager can track things such as rebuilding packages against updated libraries, and ensuring dependencies are installed at the correct versions. Depending on your OS this is pretty easy; in my case, Gentoo makes it extremely easy.

Alephone

I reintroduced Alephone packages1 to play the classic Bungie first person shooters Marathon, Marathon 2: Durandal, and Marathon 3: Infinity. Thankfully I could base some of my work off my old Portage overlay combined with couple-year-old commits from the official Gentoo repository.

After getting a show-stopping bug addressed, I added a prerelease package that includes fixes for memory corruption, flickering sprites, and (some of the) popping audio. See details in the Bugs Addressed section.

The toolchain used for my university course

An ongoing desire was to do all my university homework locally, without logging into servers with less software choice and an abnormal amount of network jitter/latency spikes. I finally made it happen with a combination of rsync invocations, a tar -czvf, and a Gentoo package.

I find this very exciting. I invest very heavily in my computer environment, and try my best to avoid doing complicated work in unfamiliar environments. I can also do work offline now.

It is worth noting that the distfiles for this package are not publicly available, and as such you will have to be a student in the course to install it. This is intentional. I have zero interest in trying to make this toolchain public or open source; I merely want to use it locally.

Bugs addressed

Dealing with an issue making Emacs unresponsive

I am pleased to have discovered and fixed a longstanding bug that would render my Emacs unresponsive after visiting files, then deleting the directories those files resided in. I wish I had documented the first time I noticed this problem; it may have been as soon as I introduced auto-virtualenvwrapper to my workflow. This package tells Emacs to automatically search for Python virtualenvs, for use with ansi-term (a terminal inside Emacs), running Python code from Emacs, and getting accurate tab completion when writing Python.

This was one of those irksome issues that is difficult to debug unless one invests the effort to configure Emacs to report error traces ahead of time, something that can't be enabled after the bug occurs. As a result, every time I encountered this bug I gave up and simply restarted the Emacs daemon, because the messed-up minibuffer precluded issuing M-x toggle-debug-on-error RET.

I want to thank the maintainer of auto-virtualenvwrapper for being very responsive to the pull request I made. Contributing to Emacs packages is hit-or-miss, because it seems some of the less commonly used packages are dead. Additionally, there is a culture of disinterest in accepting PRs that don't directly improve the maintainer's quality of life.

Strange Alephone memory corruption

I was very excited to get Alephone packaged and installed, but I started noticing weirdness on my workstation setup. It started with some graphics corruption: sprites rotated 90°, and severe visual corruption when interacting with the in-game text terminals. Invariably, on every exit the game segfaulted with corrupted size vs. prev_size.

I reported the issue, and after a lot of testing it became apparent the issue only occurs when playing at my native resolution, 1440p (2560×1440), but not at 1080p (1920×1080). Thanks to my (overly) comprehensive testing and a couple of passionate project maintainers, someone was able to pinpoint the source of the bug: an out-of-bounds write. The writes were caused by a statically allocated buffer used to copy artifacts of the render trees onto the screen. Or something like that.

I wrote a quick and dirty patch, and later one of the maintainers helped write a more future-proof patch. After testing, it appears the problem is fixed. This was a fantastic experience: the discussion was on topic, there was no bike-shedding, and everybody treated each other with kindness.

Dropping Nvidia

In 2015 I purchased a used Nvidia GTX 760 for $50. It was a great investment. At the time, AMD driver quality was pretty poor. These were the post-fglrx horror years, and the drivers were still subpar compared to Nvidia's proprietary drivers. You could not expect Windows-par graphics performance from an AMD card in 2015; on the other hand, you could expect it from an Nvidia card.

Why AMD and not Nvidia

The landscape has completely changed in the last 5 years. AMD has open sourced their graphics drivers, and is actively helping maintain them. Nvidia, on the other hand, has inherent issues such as:

  • upgrading the driver breaks the 3D acceleration of currently running Xorg sessions, and requires a reboot;
  • out-of-tree kernel drivers are usually a bad idea; though I appreciate how easy Gentoo makes it to deal with them (simply run emerge @module-rebuild), it is still a mild annoyance because it adds extra steps when upgrading or rebuilding kernels;
  • no native-resolution modesetting is available on Nvidia, so your Linux consoles (tty1–tty6 on most installs) are stuck at a very low resolution and look very chunky;
  • you have to either use Nvidia's libGL or use Mesa, not both (libglvnd fixes this, apparently);
  • it's yet another piece of non-free software to install on my computer; if bugs occur I cannot contribute fixes, or solicit fixes from other users;
  • OBS acts up with the Nvidia binary drivers: GZDoom skyboxes are not captured, and certain 3D applications are somewhat difficult to capture correctly;
  • Nvidia's composition pipeline feature for reducing video tearing is pretty awful. It simply makes most animations look choppy/stuttery, and ruins the experience of most video playback;
  • Nvidia is liable to drop support for my card in another year or so, forcing me to upgrade anyway; this is planned obsolescence at the driver level. With AMD, on the other hand, the driver will probably stay in-tree and supported for a couple decades;
  • there is no way to track the resource usage of my GPU (it's too old for nvidia-smi to report resource usage), while radeontop has been able to do this on all AMD cards for a very long time;
  • and there is a bug in Nvidia's HDMI ALSA drivers that prevents PulseAudio from redetecting most of my sound interfaces on S3 resume from suspend. The usual workaround is to either unplug my HDMI output or keep killing PulseAudio until it magically works.

With AMD, on the other hand, I don't foresee most of these issues. Presently I find S3 suspend-resume cycles take up to a minute, so I need to address that. Video tearing, however, is very minimal; I have been able to watch this YouTube tearing test and experience no video tearing. I did notice tearing in certain parts of Firewatch, though that is likely because Firewatch is not particularly well optimized.

Gotchas switching cards

The GPU arrived Thursday, and I got super excited and neglected to run emerge -uDU --changed-deps -av @world after an emerge --sync. The card installed fine, but X would segfault. I noticed in the Xorg logs that it couldn't open the radeonsi driver. I thought I could simply add amdgpu to VIDEO_CARDS, but as the logs suggest, I needed both radeonsi and amdgpu. The Gentoo Wiki also suggests this. Because I was trying to update and reconfigure my installation at the same time, this led to problems with blockers. It seemed nvidia was the problem, as it was masking Xorg versions I needed. I nuked nvidia from my VIDEO_CARDS and was successful in updating and reconfiguring my graphics stack.

Additionally, it appears the vulkan USE flag must be enabled on media-libs/mesa for some Steam games to work, such as The Talos Principle. I think the Nvidia binary drivers support Vulkan out of the box, hence I never had to set a USE flag for the previous GPU driver.

Finally, I had to configure mpv to not use vdpau (I had forced mpv to use vdpau for my Nvidia card); otherwise mpv would give me a black screen.

Schoolwork

I found using a graphics tablet to be valuable for my math class. I can take notes in xournal, and write out problems step by step. You might wonder what's wrong with paper, but it seems that when sitting in front of a computer, watching lectures and interacting with online learning management systems, it is difficult to split attention between the computer and a notebook. As such, I simply decided to digitize the notebook.

To make the experience more tolerable, I have been using youtube-dl to grab all the videos I can, and play them locally in mpv. This ensures I have global multimedia shortcuts to control video playback, better control over frame advance and playback speed, and no requirement for 24/7 internet access.

As I finally packaged the software used for one of my classes, I can now do all of that class's work locally except for submission. This is fantastic because I can use my Emacs 26 setup and do not require a 24/7 Internet connection.

My office is located in a room that can get down to the low 60s (°F) at night, and I found that many times I wanted to do work, I could barely focus because I was so unevenly cold: the floor would be 60–65 while the room was 70. I feel like an old man complaining about this, but getting a space heater really did wonders for my productivity and focus. This is mostly a schoolwork problem, because schoolwork is the most tedious sort of productivity.

Conclusion

This has been a rather long post. I really wanted to describe some of the things I've been up to and some of the challenges I've been facing. I am very happy to have removed my workstation's Nvidia dependency. I am very excited about graduating soon; adjusting during this time, while keeping that goal in mind, has been a challenge. As usual, packaging software and fixing bugs keeps my computers usable and maintainable.

I hope to write more in the near future. I had started some posts on debugging a GTK bug, among other topics, but the amount of material to cover kept growing, much like this post did. Stay tuned to read about seccomp madness.

Footnotes:

1

See my overlay on GitHub.

Tags: lifestyle gentoo computing
29 Mar 2020

Extending a wireless LAN with a bridged Ethernet LAN using Mikrotik RouterOS

I recently moved, and my new abode has an Ubiquiti Amplifi LAN. The rationale is that this mesh-based WiFi network eliminates the need to run Ethernet between wireless access points (APs). It works surprisingly well. In this post I document how I extended this network so I could place all my networked devices on the same Ethernet segment, without wiring them to the Amplifi base station.

The idea is the network should look like this:

[Figure: wireless-network-topology.png — the intended network topology]

Multiple Subnets Gotcha: No static routes or additional IP addresses on the Amplifi Router

Initially I wanted to put my devices on their own RFC 1918 network: the main Amplifi LAN would be on 192.168.182.0/24 and my network would be on 10.9.8.0/24. I knew that if each router held an address in each subnet, the routers would be able to route between each other without any additional configuration.

Unfortunately Amplifi does not offer a facility to add additional IP addresses to the LAN IP configuration. Think about how absurd that is: a router that doesn't let you put multiple addresses on an interface or bridge. Yep, pretty silly. This led me to the second approach: add static routes to make every segment on the network routable. Unfortunately, yet again, Ubiquiti Amplifi does not offer this standard router feature.

Given that Amplifi offers neither static routing nor multiple LAN IP addresses, the only approach that would let me keep a separate subnet is NAT (Network Address Translation), which totally defeats the purpose: a second subnet should be able to host any number of network services reachable from anywhere on the network, without any port forwarding.

Configuring the Mikrotik Routerboard

My examples assume the user is logged into the CLI via SSH. Make sure to read through the RouterOS Wiki documentation and upcoming replacement wiki when things are not clear.

In particular check out the guides for scripting, using the console, first-time setup, and troubleshooting tools.

The steps are as follows:

1) Make a backup of your current RouterOS configuration

Read about the /export and /import commands; /system backup is also worth reading about.

[admin@MikroTik] > # Export to local file
[admin@MikroTik] > /export verbose file=2020-03-29
[admin@MikroTik] > # Confirm file exists
[admin@MikroTik] > /file print
 # NAME                                                 TYPE                                                       SIZE CREATION-TIME       
 0 2020-03-29.rsc                                       script                                                  40.2KiB mar/29/2020 12:44:43
 1 flash                                                disk                                                            dec/31/1969 19:00:03
 2 flash/skins                                          directory                                                       dec/31/1969 19:00:03
 3 flash/pub                                            directory                                                       mar/20/2020 03:04:00

Then copy it over.

$ sftp admin@192.168.182.70
Connected to 192.168.182.70.
sftp> ls
2020-03-29.rsc   flash
sftp> get 2020-03-29.rsc
Fetching /2020-03-29.rsc to 2020-03-29.rsc
/2020-03-29.rsc                                                                                            100%   40KB   2.9MB/s   00:00    
sftp> exit

2) Reconfigure the bridge

You probably want everything on the bridge, including ether1, which is often used as the WAN port in a typical router setup. More information on RouterOS bridges.

[admin@MikroTik] > /interface bridge port print 
Flags: X - disabled, I - inactive, D - dynamic, H - hw-offload 
 #     INTERFACE                              BRIDGE                              HW  PVID PRIORITY  PATH-COST INTERNAL-PATH-COST    HORIZON
 0   H ;;; defconf
       ether2                                 bridge                              yes    1     0x80         10                 10       none
 1 I H ;;; defconf
       ether3                                 bridge                              yes    1     0x80         10                 10       none
 2 I H ;;; defconf
       ether4                                 bridge                              yes    1     0x80         10                 10       none
 3 I H ;;; defconf
       ether5                                 bridge                              yes    1     0x80         10                 10       none
 4 I   ;;; defconf
       sfp1                                   bridge                              yes    1     0x80         10                 10       none
 5 I H ether1                                 bridge                              yes    1     0x80         10                 10       none
 6 I   wlan1                                  bridge                                     1     0x80         10                 10       none
 7     wlan2                                  bridge                                     1     0x80         10                 10       none

To add or remove devices, use /interface bridge port add and /interface bridge port remove. Press TAB to complete commands and arguments.
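For example, to put ether1 on the bridge (assuming the default bridge is named bridge, as in the output above), and then to take it off again:

/interface bridge port add bridge=bridge interface=ether1
/interface bridge port remove [find interface=ether1]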

3) Remove obsolete firewall rules

The following scripts can be copy-pasted into the terminal. More information on RouterOS scripting here.

I've included the following image because I think everyone should be able to enjoy the colorful nature of RouterOS's shell.

[Image: firewall-remove-script.png — removing firewall rules in the colorful RouterOS shell]

/ip firewall filter

As per a suggestion on IRC (thanks drmessano), it's best to drop all the firewall rules, since this device should act as a transparent bridge and should not be restricting traffic.

:foreach rule in=[/ip firewall filter find] do={ \
  :if ([/ip firewall filter get $rule dynamic])  \
    do={}                                        \
    else={/ip firewall filter remove $rule}      \
};                                               \
/ip firewall filter print

/ip firewall mangle

I didn't see a need for any mangle rules, so this should be empty as well.

:foreach rule in=[/ip firewall mangle find] do={ \
  :if ([/ip firewall mangle get $rule dynamic])  \
    do={}                                        \
    else={/ip firewall mangle remove $rule}      \
};                                               \
/ip firewall mangle print

/ip firewall nat

Everything here should be disabled or removed; this setup does not use NAT.

:foreach rule in=[/ip firewall nat find] do={ \
  :if ([/ip firewall nat get $rule dynamic])  \
    do={}                                     \
    else={/ip firewall nat remove $rule}      \
};                                            \
/ip firewall nat print

4) Disable DHCP Server

You can probably just run:

/ip dhcp-server disable 0

This is necessary because this device shouldn't be doling out IP addresses. Only one router on this subnet should be running a DHCP server.
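If /ip dhcp-server print shows more than one entry, a loop in the same style as the firewall cleanup above disables them all:

:foreach srv in=[/ip dhcp-server find] do={/ip dhcp-server disable $srv}; \
/ip dhcp-server print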

5) Configure device IP Address

Add an out-of-band management IP address

In general it's a good idea to add out-of-band management IP addresses to devices that don't have another way to log in. My particular device does not have an accessible serial console, so I need to make sure I can always address this Routerboard, even if the LAN and its DHCP server go down.

There are two ways to achieve this: use an IPv6 link-local address or manually add a static RFC1918 IPv4 address. I will use a static IPv4 address.

/ip address add address=10.128.0.1/24 interface=bridge

Make sure to write this address down; it will save a hard reset down the road. Maybe attach it to the unit with a printed label.

With my Linux box's Ethernet port hooked directly up to the Routerboard, I can assign another IPv4 address on the same subnet, then log into the router.

winston@snowcrash ~ $ sudo ip address add dev enp2s0 10.128.0.10/24
winston@snowcrash ~ $ ssh admin@10.128.0.1
The authenticity of host '10.128.0.1 (10.128.0.1)' can't be established.
RSA key fingerprint is SHA256:QhJryzCxFpT/wW4Mmg7R6QEnRDPeYsY2SAF/hlc7Mx4.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.128.0.1' (RSA) to the list of known hosts.








  MMM      MMM       KKK                          TTTTTTTTTTT      KKK
  MMMM    MMMM       KKK                          TTTTTTTTTTT      KKK
  MMM MMMM MMM  III  KKK  KKK  RRRRRR     OOOOOO      TTT     III  KKK  KKK
  MMM  MM  MMM  III  KKKKK     RRR  RRR  OOO  OOO     TTT     III  KKKKK
  MMM      MMM  III  KKK KKK   RRRRRR    OOO  OOO     TTT     III  KKK KKK
  MMM      MMM  III  KKK  KKK  RRR  RRR   OOOOOO      TTT     III  KKK  KKK

  MikroTik RouterOS 6.46.4 (c) 1999-2020       http://www.mikrotik.com/

[?]             Gives the list of available commands
command [?]     Gives help on the command and list of arguments

[Tab]           Completes the command/word. If the input is ambiguous,
                a second [Tab] gives possible options

/               Move up to base level
..              Move up one level
/command        Use command at the base level

[admin@MikroTik] > 

Provision an IPv4 address on the LAN for easier management

I opted to use DHCP, but I'll show both ways to assign an IP address to the router. Before adding more DHCP clients or a static IP, make sure to disable or remove any existing DHCP clients.

# Change "remove" to "disable" to keep the configuration available for later use.
:foreach cl in=[/ip dhcp-client find] do={/ip dhcp-client remove $cl}; \
/ip dhcp-client print

To add a DHCP client, run a command like this:

/ip dhcp-client add interface=bridge disabled=no

To add a static IPv4 address, run commands like the following:

# Give the Routerboard an address
/ip address add address=192.168.182.70/24 interface=bridge

# The following two commands tell the Routerboard how to access the internet, so it
# can get updates or access cloud services.
#
# Where 192.168.182.1 is the default gateway (the main router)
/ip route add gateway=192.168.182.1
# Set the DNS servers preferring in order: the main router, Cloudflare, and Google
/ip dns set servers=192.168.182.1,1.1.1.1,8.8.8.8

6) Configure the WiFi

Here is my WiFi configuration:

[admin@MikroTik] > /interface wireless print 
Flags: X - disabled, R - running 
 0 X  name="wlan1" mtu=1500 l2mtu=1600 mac-address=CC:2D:E0:E1:3E:B9 arp=enabled interface-type=Atheros AR9300 mode=station-pseudobridge 
      ssid="MyCoolWifiName" frequency=auto band=2ghz-b/g/n channel-width=20/40mhz-Ce secondary-channel="" scan-list=default 
      wireless-protocol=802.11 vlan-mode=no-tag vlan-id=1 wds-mode=disabled wds-default-bridge=none wds-ignore-ssid=no bridge-mode=enabled 
      default-authentication=yes default-forwarding=yes default-ap-tx-limit=0 default-client-tx-limit=0 hide-ssid=no 
      security-profile=default compression=no 

 1  R name="wlan2" mtu=1500 l2mtu=1600 mac-address=CC:2D:E0:E1:3E:B8 arp=enabled interface-type=Atheros AR9888 mode=station-pseudobridge 
      ssid="MyCoolWifiName" frequency=auto band=5ghz-a/n/ac channel-width=20/40/80mhz-Ceee secondary-channel="" scan-list=default 
      wireless-protocol=802.11 vlan-mode=no-tag vlan-id=1 wds-mode=disabled wds-default-bridge=none wds-ignore-ssid=no bridge-mode=enabled 
      default-authentication=yes default-forwarding=yes default-ap-tx-limit=0 default-client-tx-limit=0 hide-ssid=no 
      security-profile=default compression=no 

1) Take the WiFi offline

First, I recommend taking the WiFi offline until you're happy with the configuration.

/interface wireless disable numbers=0,1

2) Configure the SSID and put the Routerboard into a wireless client mode

Then make sure to set the ssid to your existing WiFi's ESSID and mode to station-pseudobridge.

/interface wireless set numbers=0,1 mode=station-pseudobridge ssid="MyCoolWifiName"

3) Configure the security-profile

Chances are your wireless device already has a security profile, so make note of the security-profile in the /interface wireless print output. It is likely security-profile=default.

Next configure the security-profile; mine looks like this:

[admin@MikroTik] > /interface wireless security-profiles print                  
Flags: * - default 
 0 * name="default" mode=dynamic-keys authentication-types=wpa-psk,wpa2-psk unicast-ciphers=aes-ccm group-ciphers=aes-ccm 
     wpa-pre-shared-key="Top secret password here" wpa2-pre-shared-key="Top secret password here" supplicant-identity="MikroTik" eap-methods=passthrough 
     tls-mode=no-certificates tls-certificate=none mschapv2-username="" mschapv2-password="" disable-pmkid=no static-algo-0=none 
     static-key-0="" static-algo-1=none static-key-1="" static-algo-2=none static-key-2="" static-algo-3=none static-key-3="" 
     static-transmit-key=key-0 static-sta-private-algo=none static-sta-private-key="" radius-mac-authentication=no 
     radius-mac-accounting=no radius-eap-accounting=no interim-update=0s radius-mac-format=XX:XX:XX:XX:XX:XX radius-mac-mode=as-username 
     radius-called-format=mac:ssid radius-mac-caching=disabled group-key-update=5m management-protection=disabled 
     management-protection-key="" 

In short, run the following command:

:local password "Top secret password here"; \
/interface wireless security-profiles set numbers=0 \
  wpa-pre-shared-key=$password wpa2-pre-shared-key=$password

4) Re-enable the WiFi and test

I know the Amplifi wireless LAN supports 802.11ac and the closest mesh-point is sufficiently close, so I'll only enable the radio capable of 802.11ac. Looking at the output of /interface wireless print I see only wlan2's band key contains support for 802.11ac (band=5ghz-a/n/ac). So I'll only enable that radio.

/interface wireless enable wlan2

In any case, once everything is set up, make sure you can ping the internet:

[admin@MikroTik] > /ping google.com count=3
  SEQ HOST                                     SIZE TTL TIME  STATUS                                                                        
    0 172.217.9.46                               56  52 25ms 
    1 172.217.9.46                               56  52 24ms 
    2 172.217.9.46                               56  52 25ms 
    sent=3 received=3 packet-loss=0% min-rtt=24ms avg-rtt=24ms max-rtt=25ms 

Some gotchas in case things don't work:

  1. Can you ping the router via its shared LAN IP?
  2. What parts of the network can be pinged from which devices?
  3. Is the WiFi SSID/password correct?

Conclusion

Though a casual reader might consider this too many steps to configure a network device, it is only a handful of operations. A router/switch's web configuration GUI might appear to streamline this, but that doesn't mean it is any simpler to configure; chances are one will have to fill in more fields and tick more boxes in such a scenario, since this is a somewhat weird and awkward network topology.

As usual the Mikrotik Wiki helped me finish this project in little time, doubly so thanks to the Mikrotik IRC channel. I rather enjoy working with RouterOS because it feels stateless: when I specify a configuration it is usually idempotent, every line of configuration feels relevant to the use-case, and in general it is rather DWIM (Do What I Mean).

I hope this helps somebody, because it took some mild mental aerobics to figure out how this works. In particular, I knew that WiFi and Ethernet have different frame formats, and Layer 2 bridging between the two requires weird hacks unique to the vendor's software/hardware. In this case, using station-pseudobridge, the Routerboard does Layer 2 bridging for certain traffic and falls back on Layer 3 for the rest. I don't fully understand it, but the results are satisfactory.

Edit: About the IRC channel

I'm a big fan of IRC; however, looking over the logs for ##mikrotik and my interactions there, I was very lucky never to be a target of abuse. Thankfully all I had to deal with was some low-intensity rudeness, plus the repercussions of reminding the channel that maybe it's a good idea to act like adults (and that people remember interactions like this). That is apparently a dangerous discussion to bring up; I guess rude people don't like being told they are being rude. I left that channel, and I think others should avoid it.

Anyway, I looked through my year's worth of logs and found these comments about the channel's abuse problem. I have anonymized the names because even trolls and rude users don't deserve personal attacks. What isn't included are the actual insults and attacks on other users; those are far too personal and explicit for my blog.

2019-12-19 16:39:03     PersonA       "##mikrotik You've got questions, we've got toxic mockery."
2019-12-19 16:42:14     PersonB       come for the advice, stay for the abuse

2019-12-20 09:20:23     PersonC PersonB: it worked, tyvm
2019-12-20 09:20:24     <--     PersonC (~PersonC@unaffiliated/PersonC) has quit
2019-12-20 09:20:45     PersonD        wat
2019-12-20 09:21:16     PersonB       left without giving me a chance to insult him
2019-12-20 09:21:21     PersonD        damn :(
2019-12-20 09:21:55     PersonD        #mikrotik: Come for the help, stay for the abuse
2019-12-20 09:22:13     PersonB       i did that one already
2019-12-20 09:22:22     PersonB       [22:42:14] <PersonB> come for the advice, stay for the abuse

2019-12-28 15:21:22     PersonE       have you come for the abuse?
2019-12-28 15:30:43     PersonF        always
2019-12-28 16:03:58     PersonB       the abuse is the only way he can come

2020-01-10 16:21:21     PersonG  well you guys led the conversation in that direction
2020-01-10 16:21:27     PersonG  start calling names
2020-01-10 16:21:32     PersonG  I don't work networking
2020-01-10 16:21:41     PersonG  I am messing at home
...
2020-01-10 16:23:24     PersonG  nah, it is excuse to be rude and without any manners.

Indeed, they seem to know how poorly they serve the community the channel exists for. As always, you can write me about this at the following email: hello AT winny DOT tech. I recommend seeking out another communication medium for questions related to Mikrotik.

Tags: computing networking mikrotik
07 Jan 2020

Switching website to GitLab Pages

Previously I detailed how I set up blog.winny.tech using GitHub for source code hosting and Caddy’s git plugin for deployment. This works well and I used a similar setup with my homepage. The downside is I host the static web content and I am tied to using Caddy.1 I imagine simpler is better, so I opted to host my static sites — https://winny.tech/ & https://blog.winny.tech/ — with GitLab pages.

What’s wrong with Caddy?

Caddy is very easy to get started with, but it has its own set of trade-offs. Over the last few years, I’ve noticed multiple hard-to-isolate performance quirks, some of which were likely related to the official Docker image. In particular, I had built a Docker image of Caddy with webdav support, and overall performance tanked to seconds per request, even with webdav disabled. I still have no clue what happened there; instrumenting Caddy through Docker appeared nontrivial, so I gave up on webdav support, reverted to my old Docker-based setup, and everything was fast once again.

There is a good amount of inflexibility in Caddy, such as the git plugin’s inability to deploy to a non-root folder of the web root. And its rewrite logic is usually what you want, but it is not nearly as flexible as nginx’s.

Asking questions on their IRC channel is usually met with no response of any kind, which suggests to me that the project’s community isn’t very active.

The move to Caddy v2 is unwelcome; I don’t want to relearn yet another set of config files and quirks, especially weeding through the layer of configuration-file-format adapters and the abstracted-away configuration options, so I would rather just use Certbot and some other HTTPD that won’t change everything for the fun of it.2

Until recently, Caddy experimented with a pretty dubious monetization strategy. HackerNoon published an article detailing how it worked. In short: they plastered text all over their website claiming you “need to buy a license” to use Caddy commercially, though that claim was never true; Caddy was always covered by the Apache License 2.0. Instead, you needed a commercial license only in the narrow case that your organization wanted to use Caddy’s prebuilt release binaries as offered on their website. It is good they stopped this scheme, but it leaves a bad taste with the community, and with me, and discourages me from relying on the project going forward.

Why GitLab Pages instead of GitHub Pages?

I have used both GitHub Pages and GitLab Pages in the past. My experience with GitHub Pages is that it’s relatively inflexible, makes it difficult to see what is going to be published, and has a CI/CD setup only useful for certain Jekyll-based sites. GitLab Pages, on the other hand, lets you set up any old Docker-based CI/CD workflow, so it is possible to render a blog with GitLab CI using any static site generator. The IEEE-CS student chapter I am a part of does just this: we use a combination of static redirect sites and a Pelican-powered static website. There are a large number of example repositories for most of the popular ways to publish a static website, including Gatsby, Hugo, and Sphinx. Needless to say, GitLab Pages puts GitHub Pages to shame in terms of flexibility.

Setting up GitLab Pages

There are two steps to setting up GitLab Pages. These are the most important ideas related to GitLab Pages; how to navigate the site is something the reader must experience for themselves. Nothing beats experimentation and reading the docs. Make sure to refer to the official GitLab Pages documentation for further details.

1) Getting GitLab Pages to deploy your git repository

Before getting started, make sure GitLab Pages is activated for your project; visit Settings → Pages on your project. Most of the Pages settings are rooted in that webpage.

How GitLab Pages CI/CD deploys your site depends on your software, or lack thereof. If you are simply deploying a static website to GitLab Pages, a simple .gitlab-ci.yml like this will work:

pages:
  stage: deploy
  script:
  - mkdir .public
  - cp -rv -- * .public/  # Note the `--'
  - mv .public public
  artifacts:
    paths:
    - public
  only:
  - master

This simply tells GitLab CI/CD to copy everything not starting with a . into the public folder. Note that the public folder path cannot be changed; it does not appear possible to use something like artifacts: paths: ["."] to deploy the entire git repository.

There is a GitLab CI/CD YAML lint website3 (and web API). Additionally, there is reference documentation for the .gitlab-ci.yml schema. Note that the linter often yields confusing error messages: for example, it is invalid to omit a script key, but the error message is Error: root config contains unknown keys: pages. Take the error messages with a grain of salt.
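For example, the lint web API can be driven from the shell (a sketch; the endpoint and its content parameter are as documented at the time of writing, so check the current API docs):

$ curl --header 'Content-Type: application/json' \
    --data '{"content": "pages:\n  script:\n    - make\n"}' \
    'https://gitlab.com/api/v4/ci/lint'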

Once you have what seems like the .gitlab-ci.yml that you want, commit it to your git repository and push to GitLab. Check progress under CI/CD → Pipelines. If everything works out, you should be able to view the website on GitLab Pages’ own domain, e.g. https://winny.tech.gitlab.io/blog.winny.tech. The format of that URL (visible in Settings → Pages) is https://<namespace>.gitlab.io/<project>. If you can’t view your website, check the CI/CD pipeline’s logs and inspect the artifacts ZIP, which is also available from the CI/CD pipelines page. Chances are you need to edit the .gitlab-ci.yml or tweak the scripts it uses.

2) Hosting the GitLab Pages site on your (sub-)domain

All the tasks in this section use Settings → Pages using the “New Domain” or “Edit” webpages.

To set up GitLab pages on your domain, you need to first prove ownership of that specific domain via a specially constructed TXT record, then configure that specific domain to point to GitLab Pages via a CNAME or A record. In general I recommend using an A record because you can stuff any other records you please on the same domain.

Simply add an A record to your DNS setup, like so: yourdomain.com. A 35.185.44.232.4 After the change propagates, which can take anywhere from seconds up to your record’s TTL (Time-To-Live), visiting your domain should serve a GitLab Pages placeholder page with a 4xx error code.
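To watch for propagation, a quick query with dig (or any DNS lookup tool) suffices:

$ dig +short A yourdomain.com
35.185.44.232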

Next, prove to GitLab that you own the domain by creating the TXT record indicated in the GitLab Pages management website. The string to the left of TXT is the name/subdomain, and the string to the right is the value. Alternatively, you can put the entire string into the value field of a TXT record (?!).

Note, the above two sub-steps are independent; one can validate the domain before adding the record to point it to GitLab, and vice versa.

GitLab Pages Gotchas

There are a few gotchas with GitLab Pages. Some stem from users not being familiar with the DNS RFCs; others are simply GitLab Pages quirks.

CNAME on apex domain is a no-no

Make sure you do not use a CNAME record on the apex domain; use an A record instead. Paraphrasing a ServerFault answer: RFC 2181 clarifies that a CNAME record can only coexist with records of types SIG, NXT, and KEY. Every apex domain must contain NS and SOA records, hence a CNAME on the apex will break things.
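A hypothetical (and deliberately invalid) apex zone fragment makes the conflict concrete:

; example.com zone: INVALID, a CNAME cannot coexist with the mandatory SOA/NS records
example.com.  IN SOA    ns1.example.com. hostmaster.example.com. (1 3600 900 604800 300)
example.com.  IN NS     ns1.example.com.
example.com.  IN CNAME  mynamespace.gitlab.io.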

CNAME and TXT cannot co-exist

The same holds for TXT and CNAME on one subdomain. For example, if one adds TXT somevaluehere and CNAME example.com to the same name, say hello.example.com, things will not behave correctly.

If we have a look at the GitLab Pages admin page, the language is mildly confusing, stating “To verify ownership of your domain, add the above key to a TXT record within to your DNS configuration.” At first, I took that to mean “place this entire string as the right-hand side of a TXT record on any subdomain in your configuration”. That does work; as such I had

blog.winny.tech. IN A   35.185.44.232
blog.winny.tech. IN TXT "_gitlab-pages-verification-code.blog.winny.tech TXT gitlab-pages-verification-code=99da5843ab3eabe1288b3f8b3c3d8872"

But they probably didn’t mean that, Surely I should have this instead:

blog.winny.tech IN A   35.185.44.232
_gitlab-pages-verification-code.blog.winny.tech IN TXT gitlab-pages-verification-code=99da5843ab3eabe1288b3f8b3c3d8872

I felt a bit silly after realizing this is what the GitLab Pages folks intended, but it really was not clear, especially given that clicking in the TXT record’s text-box highlights the entire string, instead of letting the user copy the important bits (such as the TXT record’s name) into whatever web management UI they might be using for DNS.
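Once the proper records are in place, confirming the verification record is live is quick (using the real record above; dig ships with the BIND utilities):

$ dig +short TXT _gitlab-pages-verification-code.blog.winny.tech
"gitlab-pages-verification-code=99da5843ab3eabe1288b3f8b3c3d8872"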

The feedback loop for activation of the domain is slow

It can take a while for a domain to be activated by GitLab Pages after the initial deploy. Things to look for: if you set up the CNAME or A record correctly, you should get a GitLab Pages error page on your domain. The error is usually “Unauthorized (401)”, but it can be others.

The other place to look is verify your domain is in the “Verified” state on the GitLab Pages admin website.

The feedback loop for activation of LetsEncrypt HTTPS is even longer

Sometimes GitLab Pages will seemingly never activate LetsEncrypt HTTPS for your domain. If this happens, a discussion suggests the best solution is to remove the domain from your GitLab Pages setup and add it again; you will likely have to edit the TXT record used to claim domain ownership. This worked for me when I hit the same issue.

Make sure to enable GitLab Pages for all users

Conclusion

GitLab pages isn’t perfect, but this should streamline what services my VPS hosts, and give me more freedom to fiddle with my VPS configuration and deployment. I look forward to rebuilding my VPS with cdist, ansible, or saltstack. While that happens, my website will be up thanks to GitLab pages. Also, I imagine GitLab Pages is a bit more resilient to downtime than a budget VPS provider.

The repositories with .gitlab-ci.yml files for both this site and winny.tech are public on GitLab’s official hosting. Presently it is the simplest setup possible, simply deploying pre-generated content already checked into git, but the possibilities are endless.

Footnotes:

1

I could deploy my own webhook application server that GitHub/GitLab connects to, and have done so in the past, but every application I manage is another thing I have to, well, ahem, manage (and fix bugs for).

2

There are some cool new features in Caddy 2, such as the ability to configure Caddy via a RESTful API and a sub-command driven CLI, but I don’t need additional features.

3

From the GitLab CI Linter’s old page “go to ‘CI/CD → Pipelines’ inside your project, and click on the ‘CI Lint’ button”. Or simply visit https://gitlab.com/username/project/-/ci/lint.

4

It’s a good idea to compare the mentioned IP address against what appears in the GitLab Pages Custom Domain management interface.

Tags: computing operations

© Winston Weinert (winny) — CC-BY-SA-4.0