Another speed-up... NUMA Balancing

First, record what your current boot looks like:
sudo systemd-analyze critical-chain # Boot Chain Analysis
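It's also worth saving a copy of the "before" numbers so you have something concrete to compare against after each tweak (file name here is just my choice):

```shell
# Keep a baseline of the current boot for later comparison
systemd-analyze time | tee ~/boot-before.txt
systemd-analyze blame | head -n 10 >> ~/boot-before.txt
```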

echo 1 | sudo tee /proc/sys/kernel/numa_balancing

(A plain sudo echo 1 > ... won't work: the redirection happens as your unprivileged user, not as root.)
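If you'd rather not touch grub at all, a sysctl drop-in is another way to make the setting survive reboots (the drop-in file name below is my own choice):

```shell
# Turn it on now...
sudo sysctl kernel.numa_balancing=1
# ...and persist it across reboots via a sysctl.d drop-in
echo 'kernel.numa_balancing = 1' | sudo tee /etc/sysctl.d/99-numa-balancing.conf
```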

In /etc/default/grub, append numa=on to the existing options in GRUB_CMDLINE_LINUX_DEFAULT:
GRUB_CMDLINE_LINUX_DEFAULT="numa=on"

Then run sudo update-grub to regenerate the boot configuration, reboot, and compare:
sudo systemd-analyze critical-chain # Boot Chain Analysis
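After the reboot you can also confirm the kernel actually picked the parameter up, and that balancing is switched on:

```shell
# Did the kernel see the new parameter?
grep -o 'numa=on' /proc/cmdline || echo "numa=on missing from kernel command line"
# Is automatic NUMA balancing on? (1 = on, 0 = off)
cat /proc/sys/kernel/numa_balancing 2>/dev/null || echo "no numa_balancing knob in this kernel"
```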

It shaved 3 seconds off my boot time to run level 5 (the desktop), and the desktop comes up quicker with less of a dark-screen lag between the console boot logging and the desktop. I'm now down to 19.621 seconds.

This is Non-Uniform Memory Access balancing... the kernel periodically migrates tasks and memory pages so a task runs on the NUMA node that holds the memory it touches most. A node is a group of CPUs plus the memory local to them (on big multi-socket machines, each socket is typically its own node); the kernel shuffles things about to get the shortest distance between core and code.
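One way to watch the balancer actually working is the counters in /proc/vmstat; they only tick upward while automatic NUMA balancing is active (they're only present when the kernel was built with NUMA balancing support):

```shell
# These counters only move while automatic NUMA balancing is running
grep -E '^numa_(pte_updates|hint_faults|pages_migrated)' /proc/vmstat \
  || echo "no balancing counters (kernel built without NUMA balancing?)"
```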

grep NUMA=y /boot/config-$(uname -r)

($(uname -r) is command substitution; it avoids the backticks (`) that the forum auto-formatting mangles.)

CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_ACPI_NUMA=y

lscpu | grep -i numa
NUMA node(s): 1
NUMA node0 CPU(s): 0-11

Where this really shines is if you've got several servers (such as IBM x3950s) connected together to act as a single system, or a multi-socket server. But it can help even on a single-processor machine (some CPUs expose more than one NUMA node per socket).
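If you want to see the node layout without installing anything extra (numactl --hardware gives the same information plus per-node free memory), sysfs exposes it directly:

```shell
# Standard sysfs paths: one node* directory per NUMA node
ls -d /sys/devices/system/node/node*
# Which CPUs belong to node 0
cat /sys/devices/system/node/node0/cpulist
```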

Now that I've got two identical 32 GB memory sticks, the memory controller can interleave accesses across both channels, which provides another speed-up (especially with NUMA enabled). Before, I had one 8 GB stick and one 4 GB stick (from the factory), so memory interleaving couldn't take place.
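You can check your own DIMM population from a running system with dmidecode; matched sizes in paired slots are what enable interleaving (dmidecode needs root, and -n makes sudo fail quietly instead of prompting):

```shell
# List each DIMM slot with its size, location, and speed
sudo -n dmidecode -t memory 2>/dev/null | grep -E -i 'size|locator|speed' \
  || echo "run as root to see the DIMM population"
```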

graphical.target @18.244s
└─multi-user.target @18.243s
  └─networkd-dispatcher.service @12.599s +5.642s
    └─basic.target @12.397s
      └─sockets.target @12.397s
        └─zsysd.socket @12.397s
          └─sysinit.target @12.270s
            └─apparmor.service @11.601s +668ms
              └─local-fs.target @11.600s
                └─run-user-1000-gvfs.mount @21.900s
                  └─run-user-1000.mount @19.167s
                    └─swap.target @9.589s
                      └─dev-disk-by\x2duuid-21ebe95a\x2dcdd6\x2d40d1\x2db5a2\x2dff44a768b47d.swap @9.580s +6ms
                        └─dev-disk-by\x2duuid-21ebe95a\x2dcdd6\x2d40d1\x2db5a2\x2dff44a768b47d.device @9.578s

And I haven't even gotten my USB ports running at USB 3.1 speeds yet, so the mirrored drives aren't contributing much to boot data throughput. Nor have I gotten the AMD Ryzen 5 5625U CPU to hit its boost clocks (AMD's Precision Boost) yet, so that'll be another speed-up.

[EDIT]
Slowly whittling it down...

graphical.target @15.843s
└─udisks2.service @11.516s +4.325s
  └─basic.target @11.317s
    └─sockets.target @11.316s
      └─zsysd.socket @11.316s
        └─sysinit.target @11.222s
          └─apparmor.service @10.638s +583ms
            └─local-fs.target @10.637s
              └─run-user-1000-gvfs.mount @18.971s
                └─run-user-1000.mount @15.851s
                  └─swap.target @8.987s
                    └─dev-disk-by\x2duuid-21ebe95a\x2dcdd6\x2d40d1\x2db5a2\x2dff44a768b47d.swap @8.979s +6ms
                      └─dev-disk-by\x2duuid-21ebe95a\x2dcdd6\x2d40d1\x2db5a2\x2dff44a768b47d.device @8.978s

[EDIT 2]
Another slight improvement: I disabled ModemManager.service, which one only needs when tethering via Bluetooth to a cell phone, using a dial-up or mobile-broadband modem, etc.

The WiFi hotspot from my cell phone to my computer still works, as does the USB connection.
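For anyone following along, this is what I ran (revert with systemctl enable --now if you find you need it after all):

```shell
# Stop ModemManager now and keep it from starting at boot
sudo systemctl disable --now ModemManager.service
# Verify -- should print "disabled" (or "not-found" if it isn't installed)
systemctl is-enabled ModemManager.service || true
```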

graphical.target @14.516s
└─udisks2.service @11.091s +3.424s
  └─basic.target @10.922s
    └─sockets.target @10.922s
      └─zsysd.socket @10.921s
        └─sysinit.target @10.830s
          └─apparmor.service @10.236s +592ms
            └─local-fs.target @10.235s
              └─run-user-1000.mount @16.199s
                └─swap.target @8.883s
                  └─dev-disk-by\x2duuid-21ebe95a\x2dcdd6\x2d40d1\x2db5a2\x2dff44a768b47d.swap @8.856s +24ms
                    └─dev-disk-by\x2duuid-21ebe95a\x2dcdd6\x2d40d1\x2db5a2\x2dff44a768b47d.device @8.854s

Have you tested this yet?

sudo systemctl disable systemd-networkd.service

If it helps, you can make it permanent.
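Before disabling anything, it's worth checking what a unit actually costs, assuming systemd-analyze is available:

```shell
# The most expensive units at boot
systemd-analyze blame | head -n 15
# The critical chain leading up to one specific unit
systemd-analyze critical-chain systemd-networkd.service
```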

Already disabled.

I'm looking at ways of speeding up udisks2.service:
tput rev; read -p "Package? " in; tput sgr0; sudo dpkg -L "$in" | xargs which # Show which commands belong to a package

Package? udisks2

/sbin/umount.udisks2
/usr/bin/udisksctl
/usr/lib/udisks2/udisks2-inhibit
/usr/lib/udisks2/udisksd

... but udisksctl is pretty sparse; not a lot to configure there.

I've got 7 physical drives with a total of 17 partitions, so it takes a bit to get them all enumerated.

I'm going to try listing all the drives by UUID in fstab, so that udisks2 doesn't have to scan for and enumerate them.
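A sketch of what I mean (the UUID and mount point below are placeholders; the real UUIDs come from sudo blkid). The nofail option keeps a missing drive from hanging the boot:

```
# Illustrative /etc/fstab entry -- UUID and mount point are placeholders
UUID=0000aaaa-bbbb-cccc-dddd-eeeeffff0000  /mnt/data  ext4  defaults,nofail  0  2
```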

[EDIT]
Nope, that actually added 3 seconds to the boot time... best to just let udisks do what it does, I guess.

I set it back to automatically importing most of the drives (except for the EFI boot drive and the 3 swap drives).

Oddly, it's now showing more stuff:
sudo systemd-analyze critical-chain

graphical.target @14.621s
└─udisks2.service @11.308s +3.312s
  └─basic.target @11.141s
    └─sockets.target @11.140s
      └─zsysd.socket @11.140s
        └─sysinit.target @11.066s
          └─apparmor.service @10.515s +550ms
            └─local-fs.target @10.514s
              └─zfs-mount.service @10.495s +17ms
                └─boot.mount @10.427s +42ms
                  └─zfs-import.target @10.426s
                    └─zfs-import-cache.service @8.955s +1.470s
                      └─zfs-load-module.service @8.942s +2ms
                        └─systemd-udev-settle.service @3.032s +5.704s
                          └─systemd-udev-trigger.service @2.528s +382ms
                            └─systemd-udevd-kernel.socket @2.413s
                              └─system.slice @1.617s
                                └─-.slice @1.617s

Eh, more stuff for me to tweak, I guess.
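One obvious target: systemd-udev-settle.service is deprecated upstream and ate 5.7 seconds in the chain above, so the first step is seeing what still pulls it in:

```shell
# Which units still depend on the deprecated udev-settle?
systemctl list-dependencies --reverse systemd-udev-settle.service
```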
