• Category Archives Technology
  • Upgrading Urutu – my home desktop

    After 8 years of service, Urutu got an upgrade of its core components. While the old hardware was still going strong, and capable of handling most tasks, a Kdenlive render job projected to take 23 hours finally convinced me that it was time to get some new parts.

    Urutu is on PCPartPicker.

    Main upgrades

    Item        | Old                        | New                                   | Price
    CPU         | Intel Core i7-6700K        | AMD Ryzen 9 9900X                     | ¥69,980
    Motherboard | Asus Maximus VIII Hero     | ASRock Phantom Gaming X870E Nova WiFi | ¥50,373
    RAM         | G.Skill Ripjaws 32 GB CL14 | G.Skill Trident Z5 Neo RGB 64 GB CL28 | ¥48,566

    Even without converting to 2019-money, this upgrade was significantly cheaper than the last one, coming in at well under £1000 for near-top-of-the-range components.

    I am still waiting on a “new” GPU as well: a friend has an old RTX 3090, which will be a nice upgrade from the GTX 1080, although nowhere near as top-of-the-line as the rest of the components.

    What worked well

    Overall it took some time, mostly due to the custom water loop, but the upgrade went pretty smoothly.

    • all the old components and the case still work perfectly, although I did get new 12cm case fans as well
    • making sure Windows was up-to-date and cloud-activated ensured it continued working after the upgrade
    • the new PC is blisteringly fast, sitting near the top of most of the benchmarks in HardInfo2 on Linux. My CPU benchmarks beat other Ryzen 9900X entries for some reason.

    Issues

    I encountered a couple of very minor issues during the build.

    Water loop

    Most of the work was taking apart the custom water loop and rebuilding it, but this was also a good opportunity to replace all of the tubing, as it had yellowed quite badly over the years. I made a mistake and fitted the CPU waterblock the wrong way up to ease cable routing. What I thought was a purely cosmetic issue turned out to be a practical one, as an air bubble gathered at the top of the block.

    Cabling

    There are some minor cabling issues due to the different board layout, mostly fan cables not being long enough, so extension cables were needed. This time I also went for more (A)RGB components, and these also required some extensions and splitters. In retrospect I made a mistake with the new case fans, which should have been A-RGB instead of plain RGB.

    As a result, the rear of the machine is a lot messier than the previous build.

    OS Issues – Windows 10

    I should have removed all the Asus-specific software prior to changing the hardware, as some of the (un)installers refuse to work now that they no longer detect an Asus motherboard. It’s left a bit of a mess, so, short of reinstalling Windows (not really an option), I’ve been fighting to clean up the Asus software.

    OS Issues – Debian GNU/Linux 13

    For some reason the recent upgrade to Debian 13 left me on an older kernel, which did not have drivers for the new Realtek network card. After a bit of faffing about I managed to get a driver for it and got networking up and running; after upgrading to a current kernel there were no further issues.

    Apart from that I had to remove a couple of modules which were specific to the old motherboard.

    Final Thoughts

    So far I’m very happy with the upgrade, and I’m looking forward to fixing the CPU waterblock and running new benchmarks once I get the “new” GPU.


  • Mocking system functions in C++ with GoogleMock

    Once you know how, it’s pretty easy to write unit tests and mock system functions in C++ using GoogleTest.

    But getting all the information together in one place is a bit tricky and there are a couple of gotchas. This article gives a brief overview.

    System Functions

    So what are system functions anyway? These are the APIs which your operating system provides, such as open(), read(), close(), etc. Most non-trivial code bases call various system functions in order to implement their functionality.

    Intercepting system functions

    There are multiple approaches to testing code which calls system functions:

    1. Place all system calls into wrapper classes. You now have a standard C++ class which you can mock in the normal way. The con is that you have to rewrite the code to use the wrapper class instead of calling the system functions directly.
    2. Use preprocessor tricks to override the calls to the system functions. This is only possible if you can inject the preprocessor definitions into the code to be tested and recompile.
    3. Use the linker --wrap= option to wrap the functions you’re interested in. This is the approach we’ll use in this article as it is the least intrusive to the codebase and can even be used on precompiled object files.

    Wrapping function calls

    In order to wrap a function, linkers provide the --wrap=<function> command-line parameter. This is a common approach in C when you want to trace or instrument system function calls.

    When specified, each call to function is replaced with a call to __wrap_function instead. The real function is still available via __real_function. It is common to call the real function from within the wrap function as well as adding some additional functionality.

    For example:

    #include <stdio.h>
    #include <sys/stat.h>
    
    /* Resolved by the linker to the real stat() implementation */
    extern int __real_stat(const char *filename, struct stat *stat);
    
    int __wrap_stat(const char *filename, struct stat *stat)
    {
      printf("stat(%s, %p) called\n", filename, stat);
      return __real_stat(filename, stat);
    }
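
    When building with gcc or clang, the option is passed through to the linker via -Wl, (file names here are hypothetical):

    gcc -c code_under_test.c test_main.c
    gcc code_under_test.o test_main.o -Wl,--wrap=stat -o my_test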

    C++ Gotcha! Name Mangling

    In C++ there’s a gotcha with this approach – name mangling! All C++ symbols get name-mangled to ensure that they don’t conflict. But when using the linker “wrap” feature, the names must be exactly as above. Luckily C++ provides the extern "C" mechanism to turn name mangling off for sections of code.

    So in C++ we can do the following:

    #include <stdio.h>
    #include <sys/stat.h>
    #include "mocks/dir_mock.hpp"
    
    extern "C" {
    
    // Resolved by the linker to the real stat() implementation
    int __real_stat(const char *filename, struct stat *stat);
    
    int __wrap_stat(const char *filename, struct stat *stat)
    {
      printf("stat(%s, %p) called\n", filename, stat);
      // Invoke the mock if it is active
      if (FileMock::instance != nullptr) {
        return FileMock::instance->stat(filename, stat);
      }
      // Otherwise call the real implementation
      return __real_stat(filename, stat);
    }
    
    } // extern "C"

    Linker wrap flags using CMake

    CMake is a popular meta-build system, and as of CMake v3.13 it is fairly easy to pass linker flags to the build using the target_link_options() command.

    cmake_minimum_required(VERSION 3.13)
    
    add_executable(my_test ...)
    target_link_options(my_test
      PRIVATE
        LINKER:--wrap=stat
        LINKER:--wrap=...
    )

    Writing the mock

    With the wrapped functions above and the build system configured correctly, it is now pretty easy to create a mock class for use in the unit tests:

    #include <gmock/gmock.h>
    #include <sys/stat.h>
    
    class FileMock {
    public:
      static FileMock *instance;
      FileMock() { instance = this; }
      virtual ~FileMock() { instance = nullptr; }
      MOCK_METHOD(int, stat, (const char *filename, struct stat *stat));
    };

    Add more mock methods as required, making sure to provide the wrapped functions as well.

    NOTE: The static instance member must be declared in an implementation file somewhere; the file containing the wrap functions is a good candidate.
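
    For example, alongside the wrap functions:

    // Definition of the static instance pointer (nullptr while no mock is active)
    FileMock *FileMock::instance = nullptr;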

    Writing the tests

    We can now write unit tests which use the mock class instead of running against the real system.

    TEST(StatTests, test_stat_real_directory) {
      struct stat s{};
      ASSERT_EQ(0, stat("/tmp", &s));
      ASSERT_TRUE(S_ISDIR(s.st_mode));
      ASSERT_NE(0, stat("/invalid_directory", &s));
    }
    
    TEST(StatTests, test_stat_mock_directory) {
      FileMock fm;
      EXPECT_CALL(fm, stat(::testing::StrEq("valid_dir"), ::testing::_))
        .WillOnce([](const char* path, struct stat* s) {
          if (s) {
            memset(s, 0x00, sizeof(*s));
            s->st_mode |= S_IFDIR;
          }
          return 0;
        });
      EXPECT_CALL(fm, stat(::testing::StrEq("invalid_dir"), ::testing::_))
        .WillOnce([](const char* path, struct stat* s) {
          errno = ENOENT;
          return -1;
        });
    
      struct stat s{};
      int rv = stat("valid_dir", &s);
      ASSERT_EQ(rv, 0);
      ASSERT_TRUE(S_ISDIR(s.st_mode));
    
      rv = stat("invalid_dir", &s);
      ASSERT_NE(rv, 0);
      ASSERT_EQ(errno, ENOENT);
    }

    As the example shows, selecting between the real implementation and the mock is as simple as constructing (or not constructing) an instance of the mock class in the unit test.

    C++ Gotcha! Mock Parameter Validation

    In the example above we see (or rather, don’t see, as it’s already been compensated for) another C++ Gotcha: parameter validation in EXPECT_CALL mock function calls.

    It is essential that the correct Matcher is used to validate parameters. In the case above, if the StrEq() Matcher weren’t used, C++ wouldn’t compare the strings, as one might expect; it would compare the pointers! Even though the pointers point to the same string value, they are different, and hence the mock call would fail.

    Wrapping up

    So in summary, to unit test your code with mocked system calls:

    1. Wrap the system call(s) which your code invokes
    2. Write a mock class containing mock methods for each system call
    3. Implement the wrapped system call(s) to call the mock method
    4. Win!

    Example

    The inspiration for this article was the research I did to write a small Directory Iterator class. The full example with unit tests is available here.


  • Upgrading “boa” from Debian 11 to Debian 12

    After being nagged for months that Debian 11 was no longer supported, I finally bit the bullet and used a long weekend to start upgrading my cloud server.

    Overall it went smoothly, as expected with Debian. There was the usual merging-in of various configuration files, comparing the updated package maintainers’ versions with my own modified versions, but apart from taking time there were no major surprises.

    I did have two issues though:

    1. Owncloud
    2. LDAP

    Owncloud

    This one was mostly my own fault.

    I performed the system upgrade first, which upgraded PHP from 7.4 to 8.2. This resulted in the ancient version of Owncloud that I was running no longer working, as it did not support PHP 8.2. I should have upgraded Owncloud before upgrading the server. Something to remember for next time!

    After a fair amount of muddling about, I managed to install PHP 7.4 and the various dependencies which Owncloud needs from a third-party repository, and could finally run the manual upgrade process. A sketch of the kind of setup involved follows below.
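
    Assuming the widely-used deb.sury.org repository (the repository choice and package names are my assumptions; pick packages to match Owncloud’s requirements):

    # add the deb.sury.org PHP repository and install PHP 7.4 from it
    curl -sSL https://packages.sury.org/php/apt.gpg -o /etc/apt/trusted.gpg.d/php.gpg
    echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" \
      > /etc/apt/sources.list.d/php.list
    apt-get update
    apt-get install php7.4 php7.4-mysql php7.4-xml php7.4-gd php7.4-curl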

    Still couldn’t log in though, as LDAP was broken.

    LDAP

    Debian 12 upgraded OpenLDAP from version 2.4 to 2.5. This included some breaking changes, which required manually editing the backed-up LDAP data before it could be imported again.

    Various websites gave some information, but in the end the README installed on the server itself turned out to be the best resource, and I was finally able to re-import my LDAP data.


  • Migrate partitions to LVM on a live server

    Background

    My server was provisioned by Contabo as a Debian 9 server with a traditional MBR partition layout. At some point I did manage to at least split /var and /home off from the root partition, leaving the following layout for many years:

    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda      8:0    0   1.2T  0 disk 
    ├─sda1   8:1    0   237M  0 part /boot
    ├─sda2   8:2    0    20G  0 part /
    ├─sda3   8:3    0    20G  0 part /var
    ├─sda4   8:4    0     1K  0 part 
    └─sda5   8:5    0 259.8G  0 part /home

    Recently I upgraded the VPS to have more disk space, as I was starting to run low. Instead of mucking about with an ever-increasing number of relatively inflexible extended partitions and symlinks, I decided to figure out a way to convert this layout to LVM, which will give me the flexibility to manage the disk space in the future.

    After a bit of research and prototyping all the steps in a local VM, I came up with the following procedure which worked for me.

    Note that I did NOT convert the /boot partition and that the disk remains an MBR-partitioned disk.

    Before Starting

    It is strongly recommended to take a backup and/or snapshot of the server before commencing. A single mistype or issue during the conversion could lead to full data loss.

    Initial Conversion

    The first step is to convert sda2 and sda3 to use the Logical Volume Manager (LVM) and move the root and var partitions into that. This requires a multi-step process:

    1. Create a temporary LVM Physical Volume (PV) and move the data into it
    2. Update the system configuration
    3. Reboot the system
    4. Remove the original partitions and replace them with a new PV
    5. Add the PV to the LVM Volume Group (VG)
    6. Remove the temporary PV from the VG

    To create the temporary PV, I first needed to increase the size of the extended partition to make use of the new disk space; I used cfdisk for this. Note that there seems to be a bug in the Debian 11 version of cfdisk: when first increasing the size of the extended partition, a message is shown stating the maximum size, but nothing happens. Deleting the size and pressing Enter again applies the size last specified.

    Next, create the LVM volumes:
    
    # initialise /dev/sda6 as a PV and create Volume Group vg1 on it
    pvcreate /dev/sda6
    vgcreate vg1 /dev/sda6
    # create Logical Volumes for / and /var and format them
    lvcreate -n root -L 8G vg1 && mkfs.ext4 /dev/mapper/vg1-root
    lvcreate -n var -L 20G vg1 && mkfs.ext4 /dev/mapper/vg1-var

    Now we can copy the data. Note that when copying /var, it is useful to shut down as many services on the server as possible to reduce the data being actively written to the partition. It may also be a good idea to run the sync again just prior to reboot, as shown after the commands below.

    mount /dev/mapper/vg1-root /mnt
    rsync -avxq / /mnt/
    mount /dev/mapper/vg1-var /mnt/var
    rsync -avxq /var/ /mnt/var/
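
    For example, right before rebooting (a sketch; --delete removes any files that have disappeared since the first copy):

    rsync -avxq --delete /var/ /mnt/var/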

    And finally we edit the system configuration:

    • vim /etc/fstab
      • point the / and /var mounts to the new LVM volumes
    • vim /boot/grub/grub.cfg
      • ensure kernel command line is updated to have “root=/dev/mapper/vg1-root”

    fstab:

    # / was on /dev/sda2 during installation
    #UUID=904300f1-5d90-4c10-908a-b8ac334bd021 /               ext4    errors=remount-ro 0       0
    /dev/mapper/vg1-root                      /               ext4    errors=remount-ro 0       0
    # /boot was on /dev/sda1 during installation
    UUID=9d8415e3-5d47-42e5-b169-ab0f5db14645 /boot           ext4    defaults,noatime        0       1
    
    #UUID=8de1d736-5b9e-44b1-ba6f-34984912889e /var            ext4    errors=remount-ro 0       1
    /dev/mapper/vg1-var                       /var            ext4    errors=remount-ro 0       1
    UUID=70554039-342d-4035-8182-ece5b032ec5b /home           ext4    errors=remount-ro 0       1

    grub.cfg:

    ### BEGIN /etc/grub.d/10_linux ###
    function gfxmode {
            set gfxpayload="${1}"
    }
    set linux_gfx_mode=
    export linux_gfx_mode
    menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-a5511b28-0df7-48c2-8565-baeaede58cfa' {
            load_video
            insmod gzio
            if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
            insmod part_msdos
            insmod ext2
            insmod lvm
            set root='hd0,msdos1'
            if [ x$feature_platform_search_hint = xy ]; then
              search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1  9d8415e3-5d47-42e5-b169-ab0f5db14645
            else
              search --no-floppy --fs-uuid --set=root 9d8415e3-5d47-42e5-b169-ab0f5db14645
            fi
            echo    'Loading Linux 6.1.0-0.deb11.21-amd64 ...'
            linux   /vmlinuz-6.1.0-0.deb11.21-amd64 root=/dev/mapper/vg1-root ro rootdelay=10 net.ifnames=0 ixgbe.allow_unsupported_sfp=1 quiet 
            echo    'Loading initial ramdisk ...'
            initrd  /initrd.img-6.1.0-0.deb11.21-amd64
    }
    

    Now for the scary part – rebooting. Double-check that all configurations have also been applied to the new partitions and everything has been copied correctly. Make sure you have a way to recover should the system not reboot!

    Completing the initial conversion

    Assuming the reboot went well, the system should now be running on the new LVM Logical Volumes (LVs).

    Now delete the /dev/sda2 and /dev/sda3 partitions and create a new LVM PV in their place. Add it to the volume group, then move the data off the temporary PV and remove it:

    pvcreate /dev/sda2
    vgextend vg1 /dev/sda2
    # move all data off the temporary PV, then remove it from the VG
    pvmove /dev/sda6
    vgreduce vg1 /dev/sda6
    pvremove /dev/sda6

    Now we can delete the temporary /dev/sda6 partition.

    Moving the data

    Next we have to move the home partition. Again, this is a multi-step process as we first have to move the data out of the extended partition, then remove the extended partition.

    First create a new primary partition, /dev/sda3, large enough to hold the data. Make it a PV and add it to vg1 as before, then copy the data. Note that as before, ideally there should be no active users during the copy and any services which write into users’ home directories should be shut down.

    vgextend vg1 /dev/sda3
    lvcreate -n home -L 250G vg1 && mkfs.ext4 /dev/mapper/vg1-home
    mount /dev/mapper/vg1-home /mnt
    rsync -avxq /home/ /mnt/

    Now remount the new partition in place of the home partition. Since lazy-unmounting a filesystem can lead to all sorts of edge cases, it may be better and safer to reboot the system instead, avoiding any data loss or corruption.

    umount -l /home && mount /dev/mapper/vg1-home /home

    Next we delete all the logical partitions and the extended partition. The free space should now sit between sda2 and sda3, which lets us increase the size of sda2 and then grow the LVM PV to make use of the new space. Once that’s done, we can pvmove the data off the temporary Physical Volume and remove it:

    # grow the PV on sda2 into the enlarged partition
    pvresize /dev/sda2
    # move all data off the temporary PV, then remove it
    pvmove /dev/sda3
    vgreduce vg1 /dev/sda3
    pvremove /dev/sda3

    Finally, we can delete the /dev/sda3 partition and add the remaining free space to the LVM Volume Group. From now on it is trivial to manage the disk layout using LVM, as the sketch below shows.
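
    For example, growing /home at some later date (assuming free space in the VG; -r resizes the filesystem along with the LV):

    lvextend -r -L +50G vg1/home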

    Final Layout

    After all was said and done, the server ended up with a disk layout as follows:

    NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda            8:0    0  1.2T  0 disk 
    ├─sda1         8:1    0  237M  0 part /boot
    └─sda2         8:2    0  500G  0 part 
      ├─vg1-root 254:0    0    8G  0 lvm  /
      ├─vg1-var  254:1    0   20G  0 lvm  /var
      └─vg1-home 254:2    0  260G  0 lvm  /home

    For now I’ve left the LVM partition at 500GiB, which is double what the old disk was, and gives the various volumes plenty of room to grow.


  • PS5 Remote Play on Linux – Chiaki

    Chiaki – a remote-play client for PlayStation consoles.

    Overview

    While playing Undertale on my PS5, I got a bit frustrated with some of the trophies. Spending hours spamming the X button didn’t really feel like fun and rewarding gameplay. A brief search later led me to discover a remote-play client for Linux called Chiaki. 10 minutes later I had it up and running! Impressive.

    The remote-play session can be controlled by keyboard, or a PS5 controller can be connected to the PC. In my case, it was just plug-and-play.

    I tweaked the config a bit to use 1080p instead of the default 720p resolution, and to use hardware acceleration. I also added Chiaki to Steam and configured Steam to launch it using gamemode, as otherwise the screensaver kept kicking in. Unfortunately the Steam overlay and screenshot facility are not working (yet).

    Worked absolutely brilliantly on my setup – gigabit LAN and a 1080p-based Debian GNU/Linux 12 desktop PC.

    It was then also rather trivial to script spamming the X button using xdotool, along the lines of the sketch below.
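
    A rough sketch (it assumes Chiaki’s keyboard mapping sends Cross for Return – check your key bindings):

    # find the Chiaki window and send Return to it every 100 ms
    WIN=$(xdotool search --name Chiaki | head -n1)
    while sleep 0.1; do xdotool key --window "$WIN" Return; done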

    Issues

    Chiaki 2.2.0 – “Extended DualSense Support” crashes the remote play session, forcing a restart of the PS5 before remote play works again. To be fair, this feature is marked experimental.

    Remote Play of streaming content (eg, PlayStation Plus Premium classic games) shows a black screen, with the streamed content being displayed on the TV. Not sure if the official PlayStation remote play application has the same problem.

    Installation

    Core Installation

    The steps were pretty simple:

    1. Install Chiaki:
      1. apt-get install chiaki
    2. Retrieve account ID (NOT your login or username)
      1. I used a Python script provided by the Chiaki developers.
      2. Here’s a reddit post describing an alternative, quite convoluted, approach (didn’t try it).
      3. And here’s a webpage which retrieves it – by far the easiest method! (This does NOT require you to enter any login credentials, but does require your account to be publicly viewable.)
    3. Enter required data
      1. Account ID
      2. Code from the console
        1. Settings -> System -> Remote Play -> Link Device
    4. ?
    5. Profit!

    Optionally:

    • Add it to your Steam library
    • Run it using gamemode
    • Tweak configuration to use hardware acceleration and higher resolution

  • Undertale on PS5

    So I just played (and nearly completed) the cult indie hit Undertale on my PS5.

    Firstly, it’s an awesome little action-adventure RPG thingy. If you haven’t played it, I can highly recommend it despite its rather old-skool looks. Quirky humour, interesting choices, and only a few hours long for a basic play-through, although it has quite a lot of depth if you want to spend the time on it.

    It effectively combines puzzles and combat (via some nifty little mini-games) although exploration is quite limited. While there are some hidden areas, mostly it’s a linear story.

    My two main gripes are that there is no way to permanently speed up dialogue display, and that earning money is very grindy; money is needed to buy enough healing consumables for the final fights if you’re not so good at those.

    Specifically on the PS5 port: whoever designed the trophies for this game really should go back to the drawing board; most of them are just filler and mind-numbingly tedious repetition. It’s not even required to complete the game in order to Platinum it!
    Without going into spoilers, the game itself has plenty of opportunities for much better trophies which would properly reward the player. I’m somewhat amazed SIE approved half of these trophies!

    The game itself: 4/5
    The PS5 trophies: 2/5


  • WordPress and Piwigo? Yes please!

    So I just discovered the PiwigoPress plugin for WordPress.
    While it’s obsolete and the widget no longer works, the “short code” feature still does. Unfortunately it’s not very well documented, but it is possible to add pictures to an article which link back not only to the picture, but also to the album which that picture is part of.
    Yayy.

    Trawling through the source code, it seems the following is possible:

    [PiwigoPress id={<pic_id>,<pic_id>,...} lnktype=albumpicture url='http://gallery.lemmurg.com/']
    Parameters:
    
    • id – Piwigo picture id(s), eg:
      • id=1 – picture id 1
      • id=1-5 – all pictures with ids 1 through 5
      • id=1,3,4 – pictures with ids 1, 3, and 4
    • lnktype – what the picture links to, eg lnktype=albumpicture
      • picture – link to picture only (default)
      • albumpicture – link to picture with album
      • album – ?
    • url – URL of the Piwigo site, eg url=http://gallery.lemmurg.com
    • size – size of the picture, eg size=sm. Possible values:
      • sq – square
      • th – thumbnail
      • xs – extra small
      • sm – small
      • me – medium
      • la – large (default)
      • xl – extra large
      • xx – extra-extra large
    • name – adds the image name, eg name=1
      • 0 – no (default)
      • 1 – yes
      • auto – ?
    • desc – adds the image description, eg desc=1
      • 0 – no (default)
      • 1 – yes
    • class – ?
    • style – ?
    • opntype – whether to open in the current tab or a new one
      • _blank – open in new tab (default)
    • ordertype – ?
      • random – random order (default)
    • orderasc – whether to sort pictures in ascending order
      • 0 – no (default)
      • 1 – yes
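
    Putting it together, a hypothetical example (the picture id is made up):

    [PiwigoPress id=42 lnktype=albumpicture url='http://gallery.lemmurg.com/' size=sm name=1]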

    Additionally, it’s possible to control the layout of the embedded pictures by providing custom CSS for PiwigoPress tags as follows:

    .PiwigoPress_photoblog {
      display: inline-block;
      padding-right: 10px;
    }
    .PWGP_shortcode {
      display: inline-block;
    }

  • Upgrading Nextcloud 15 to 19 on Debian …

    So my Debian 9 server was still running Nextcloud 15. Meanwhile Nextcloud 20 is out.

    When I looked at performing the (manual) update, I actually found a Nextcloud 16 download already in place, but it seems I never completed that upgrade. Not long afterwards I discovered why – Nextcloud 16 requires PHP 7.3, but Debian 9 only has PHP 7.0 available.

    Long story short, instead of chimera’ing my Debian install, I bit the bullet and decided to finally upgrade the server to Debian 10.

    Some time later…

    After the server upgrade completed, I was able to use the Nextcloud web interface to upgrade to Nextcloud 16… and 17… and 18… and 19… and 20!

    That’s where the fun stopped: many things were broken in NC20 (apps just showing blank pages), so, having taken a backup between every upgrade, I rolled back to NC19 (incidentally validating that my backups worked).

    Most things worked out of the box. Critically for me, Grauphel did not.

    Long story short, it turns out that on Debian 10, the version of the PHP OAuth package is actually not compatible with the installed version of PHP 7.3! Installing a binary-compatible package from the Debian package snapshots site fixed this.

    Amongst other things, during the upgrade cycles I:

    • changed the database to 4-byte support, allowing for more characters in paths and comments.
    • fixed several other minor PHP configuration issues which Nextcloud was warning about.
    • fixed support for Maps (a Nextcloud bug in the upgrade scripts left some database columns misconfigured):
      • Column name "oc_maps_address_geo"."object_uri" is NotNull, but has empty string or null as default.
      • The fix was to manually edit the scripts.
    • wrote backup scripts backing up the Nextcloud directory, the database, and, optionally, the data directory (a minimal sketch follows below).
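
    A minimal sketch of such a backup script (paths, database name, and credential handling are assumptions):

    #!/bin/sh
    # back up the Nextcloud installation directory and its database
    DATE=$(date +%F)
    tar czf /backup/nextcloud-$DATE.tar.gz /var/www/nextcloud
    mysqldump --single-transaction nextcloud > /backup/nextcloud-db-$DATE.sql
    # optionally include the (potentially large) data directory
    #tar czf /backup/nextcloud-data-$DATE.tar.gz /srv/nextcloud-data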


  • Upgrading Debian 9 to Debian 10

    Triggered by needing to upgrade Nextcloud, I finally bit the bullet and decided to upgrade my virtually-hosted Debian server from Debian 9 “stretch” to Debian 10 “buster”.

    The upgrade, as usual, was fairly trivial:

    apt-get update
    apt-get upgrade
    <edit /etc/apt/sources.list to point to the new version>
    apt-get update
    apt-get upgrade
    apt-get full-upgrade
    reboot

    There were various configuration files which needed tweaking during and after the upgrade; vimdiff was very useful. I also learned a new screen feature – split-screen! (Ctrl-a |). Finally, a shoutout to etckeeper for maintaining a full history of all edits made in /etc.

    Post-upgrade Issues and Gotchas

    dovecot (imap server)

    A huge issue was that I could no longer access my emails from anywhere.

    Turns out that dovecot was no longer letting me log in. The mail log file had numerous “Can’t load DH parameters” error entries: I had not merged in a required change to the SSL certificate configuration, along the lines of the sketch below.
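
    For reference, Dovecot 2.3 no longer generates DH parameters itself and needs them specified explicitly; the fix looks something like this (the file path is an assumption) in /etc/dovecot/conf.d/10-ssl.conf:

    # generate once with: openssl dhparam -out /etc/dovecot/dh.pem 4096
    ssl_dh = </etc/dovecot/dh.pem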

    exim4 (mail server)

    The second huge issue was that exim was no longer processing incoming mail. It turns out that spamd wasn’t started after the reboot. Fixed by:

    systemctl start spamassassin.service
    systemctl enable spamassassin.service

    shorewall (firewall)

    Another major gotcha: the shorewall firewalls were not automatically re-enabled, and it took me three days to notice. Yikes! I had left the server on sys-v init instead of systemd, and the upgrade had silently switched over. After restarting the firewalls, I used systemctl enable to configure them to start on bootup:

    systemctl start shorewall.service
    systemctl enable shorewall.service
    systemctl start shorewall6.service
    systemctl enable shorewall6.service

    bind9 (name server)

    Another item was that bind was no longer starting up – it needed a tweak to the apparmor configuration. It appears that on my server the log files are written to a legacy directory, and the new default configuration prevented bind from writing into it, so it failed to start up. The fix was along the lines of the sketch below.
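
    A sketch of the kind of override involved (the log directory is an assumption), added to the local include at /etc/apparmor.d/local/usr.sbin.named:

    # allow bind to write its logs to the legacy location
    /var/log/named/ rw,
    /var/log/named/** rw,

    followed by reloading the profile with apparmor_parser -r /etc/apparmor.d/usr.sbin.named.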

    Miscellaneous

    • I finally removed dovecot spam from syslog by giving it its own logfiles (and tweaked fail2ban accordingly).
    • Various PHP options needed tweaking and several new modules needed installing to support Nextcloud (manually installed, so no dependency tracking).

    Later Updates

    • Discovered that phpldapadmin was broken. Manually downloaded and installed an updated version from “testing”.

  • New Scuba Toy – Shearwater Peregrine

    Just bought a Shearwater Peregrine as a backup for my Shearwater Perdix AI (budget didn’t quite stretch to a second Perdix..)

    Haven’t dived it yet, but the following are my immediate unboxing impressions (pictures via Google image search).

    The good:

    • Familiar layout and the same great screen as the Perdix.
    • Smaller and lighter form factor than the Perdix.
    • Mostly full-featured recreational dive computer with some intro-to-tec features.
      • No AI/compass
      • Up to 100% O2 and 3 gases
      • Lots of “tec” displays
    • Built-in battery charging is via the Qi wireless standard, so absolutely no exposed contacts.
      • But due to wrist-straps/bungees it may be difficult to use generic Qi charging pads.
    • Dive download via Bluetooth (hopefully works better than the Perdix!)

    The bad:

    • Buttons are physical rather than the piezo-electric ones from the Perdix (subjective).
    • Battery is a built-in rechargeable battery (subjective).
    • Limited display customisation (as compared to the Perdix).
    • Screen protector is a standard thin protector rather than the thick gel-like one of the Perdix.

    The ugly:

    • The charging pad uses a micro-USB cable rather than USB-C (for a brand-new product, I would expect it to use the latest standards)
    • Still fairly large (although required to support the screen, the bezel _could_ be a bit smaller given the target market)
    • No compass or Air Integration (for the price-point, many competing products offer these features)