ZFS scrub with feedback that automatically ends when the scrub ends

So with the ZFS file system, you have the ability to check for data errors via sudo zpool scrub rpool; sudo zpool scrub bpool.

But in order to monitor it, you have to issue sudo zpool status, and if you want it to update every, say, 10 seconds, sudo zpool status 10.

Of course, the terminal window doesn't automatically close when the scrub is done... it just keeps refreshing the zpool status every 10 seconds. You have to press Ctrl-C to exit.

Until now...

gnome-terminal -- /bin/sh -c 'sudo zpool scrub bpool; sudo zpool scrub rpool; while zpool status rpool | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status; sleep 1; done; clear; sudo zpool status; sleep 15'

I set that up as a keyboard shortcut (Zorin menu > Settings > Keyboard Shortcuts, scroll all the way to the bottom, click the '+' button, then enter a name, the command above, and your key combination; I used Super+S).

Now all you have to do is hit Super+S: it scrubs the drives, gives you a progress update every second, then exits once the scrub is complete.
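If you'd rather have this as a standalone script (and have it wait for both pools instead of just rpool), the loop factors out nicely. This is just a sketch; the pool names are the ones from this post, and scrub_running/scrub_and_wait are helper names I made up:

```shell
#!/bin/sh
# Sketch: scrub the pools, then poll until no pool reports a scrub in progress.
# Pool names (bpool/rpool) are the ones used in this post; adjust to taste.

# Succeeds if the given `zpool status` text shows a scrub in progress.
scrub_running() {
    printf '%s\n' "$1" | grep -q "scan: *scrub in progress"
}

scrub_and_wait() {
    sudo zpool scrub bpool
    sudo zpool scrub rpool
    # Checking both pools in one status call means we wait until BOTH finish.
    while scrub_running "$(zpool status bpool rpool)"; do
        clear
        sudo zpool status
        sleep 10
    done
    clear
    sudo zpool status   # final report once the scrubs are done
}

# scrub_and_wait   # uncomment this call when installing for real
```

Drop it in /usr/local/bin, chmod +x it, and point the keyboard shortcut at gnome-terminal -- /path/to/script instead.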

I might as well put my other keyboard shortcuts here for reference. I'm slowly working toward a script that sets up the computer the way I want all in one go... so if something goes catastrophically wrong, all I have to do is boot the Zorin OS USB stick, reinstall, run the script, and I'm back up and running. All my personal files are kept on external drives, on NTFS partitions so they're readable from both Windows and Linux... if I have to bug out due to some catastrophe, I can just grab the drives and go, and I lose nothing when reinstalling.

Drive Cleanup (Super+C): gnome-terminal -- /bin/sh -c 'echo Running Stacer...; stacer; sleep 3; clear; echo Cleaning Chromium cache...; sudo rm /home/owner/.cache/chromium/Default/Cache/Cache_Data/* -vf; sudo rm /home/owner/.cache/chromium/Default/"Code Cache"/js/* -vf; sudo rm /home/owner/.cache/chromium/Default/"Code Cache"/wasm/* -vf; sudo rm /home/owner/.cache/chromium/Default/"Code Cache"/webui_js/* -vf; sleep 3; clear; echo Clearing snapshots...; sudo zfsflush.sh -s 1 -p bpool; sudo zfsflush.sh -s 1 -p rpool; sleep 3; clear; echo Clearing /var/backups...; sudo rm /var/backups/* -f; sleep 3; clear; echo Clearing APT cache...; sudo apt update; sudo apt autoremove; sudo apt autoclean; sudo apt clean; sleep 3; clear; echo Clearing Logs...; sudo journalctl --rotate; sleep 20; sudo journalctl -m --vacuum-files=0; sleep 5; sudo journalctl --rotate; sleep 20; sudo journalctl -m --verify --sync --flush --rotate --vacuum-time=1s; sleep 3; clear; echo Clearing thumbnails...; rm -rf ~/.cache/thumbnails/*; sleep 10'
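A note on those rm lines: if one of the cache directories doesn't exist yet, the glob just gets passed through literally and rm -f stays silent about it. Here's a sketch of a more forgiving variant that empties each directory without deleting the directory itself, using find so the space in "Code Cache" needs no quoting gymnastics. CACHE_ROOT and clean_cache_dir are my names, not anything standard:

```shell
#!/bin/sh
# Sketch: empty browser cache directories, tolerating ones that don't exist.
# CACHE_ROOT defaults to the Chromium path from the post; override to test.
CACHE_ROOT="${CACHE_ROOT:-$HOME/.cache/chromium/Default}"

clean_cache_dir() {
    # Guard with [ -d ] so a missing directory is skipped instead of erroring;
    # -mindepth 1 removes the contents but keeps the directory itself.
    [ -d "$1" ] && find "$1" -mindepth 1 -maxdepth 1 -exec rm -rf {} +
    return 0
}

clean_cache_dir "$CACHE_ROOT/Cache/Cache_Data"
clean_cache_dir "$CACHE_ROOT/Code Cache/js"
clean_cache_dir "$CACHE_ROOT/Code Cache/wasm"
clean_cache_dir "$CACHE_ROOT/Code Cache/webui_js"
```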

Flush ZFS Snapshots And Do Garbage Collection (Super+Z):
gnome-terminal -- /bin/sh -c 'sudo zfsflush.sh -d 15 -p bpool; sudo zfsflush.sh -d 15 -p rpool; sleep 5; sudo zsysctl -vv service gc -a; sudo zfspurge.sh; sleep 5; sudo zfsclean.sh; sleep 10'

(code for zfsflush.sh here)

(code for zfsclean.sh here)

(code for zfspurge.sh here)
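Since the scripts themselves aren't shown above, here's a guess at the shape of the snapshot-flushing part — purely a sketch of what a zfsflush.sh-style helper might do, not the real script. With -Hp, zfs list prints tab-separated output with creation times as epoch seconds, which makes the age comparison trivial:

```shell
#!/bin/sh
# Sketch (an assumption, not the actual zfsflush.sh): destroy snapshots
# older than N days by comparing epoch-second creation times.

# Reads `zfs list -Hp -t snapshot -o name,creation` output on stdin and
# prints the names of snapshots created before the cutoff epoch given as $1.
snapshots_older_than() {
    awk -v cutoff="$1" -F'\t' '$2 < cutoff { print $1 }'
}

# Example driver (needs a real pool; shown for context only):
# cutoff=$(( $(date +%s) - 15 * 86400 ))
# zfs list -Hp -t snapshot -o name,creation -r rpool \
#     | snapshots_older_than "$cutoff" \
#     | while read -r snap; do sudo zfs destroy "$snap"; done
```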

You'd put zfsflush.sh, zfsclean.sh and zfspurge.sh into /usr/local/bin and make them executable via:
sudo chmod +x /usr/local/bin/zfsflush.sh
sudo chmod +x /usr/local/bin/zfsclean.sh
sudo chmod +x /usr/local/bin/zfspurge.sh
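The copy-then-chmod steps can also be collapsed into one loop with install(1), which copies the file and sets the mode in a single step. SRC is an assumption about where you saved the scripts:

```shell
#!/bin/sh
# Sketch: install the helper scripts in one pass.
# SRC (where you downloaded/saved them) and DEST are adjustable assumptions.
SRC="${SRC:-$HOME/Downloads}"
DEST="${DEST:-/usr/local/bin}"

install_helpers() {
    for script in zfsflush.sh zfsclean.sh zfspurge.sh; do
        # `install -m 755` copies and marks executable in one step
        sudo install -m 755 "$SRC/$script" "$DEST/$script"
    done
}

# install_helpers   # uncomment to run for real
```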

Reclaim Memory (Super+M):
gnome-terminal -- /bin/sh -c 'sudo sync; sleep 3; sudo sysctl -w vm.drop_caches=3; sleep 3; sudo sync'
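For reference, the 3 written to vm.drop_caches means "drop both the page cache and the dentry/inode caches" (1 and 2 select them individually), and the kernel only discards clean caches, which is why the sync calls matter. A sketch with the values spelled out — drop_caches_meaning is just an illustrative helper, not a real command:

```shell
#!/bin/sh
# Sketch: the drop_caches values, plus the reclaim sequence from the post.
# Writing the sysctl only discards *clean* caches; sync first so dirty
# pages get written out and become droppable.

drop_caches_meaning() {
    case "$1" in
        1) echo "page cache" ;;
        2) echo "dentries and inodes" ;;
        3) echo "page cache, dentries and inodes" ;;
        *) echo "unknown"; return 1 ;;
    esac
}

reclaim_memory() {
    sudo sync
    sudo sysctl -w vm.drop_caches=3
    sudo sync
}

# reclaim_memory   # uncomment to run for real
```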

Update All (Super+U):
gnome-terminal -- /bin/sh -c 'echo Updating...; sleep 5; sudo apt update; sudo apt full-upgrade; sudo fwupdmgr refresh --force; sudo fwupdmgr update; sudo flatpak update; sudo snap refresh; sudo apt autoremove; sudo apt autoclean; sudo apt install --fix-broken; sleep 30'
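With a chain that long it's easy to lose track of what actually runs, so here's the same idea sketched as a script with a dry-run switch: print the commands first, then run for real. The run wrapper and DRY_RUN flag are my own naming, not anything standard:

```shell
#!/bin/sh
# Sketch: the update chain with a dry-run mode. DRY_RUN=1 prints each
# command instead of executing it, so you can sanity-check the sequence.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "would run: $*"
    else
        sudo "$@"
    fi
}

update_all() {
    run apt update
    run apt full-upgrade -y
    run fwupdmgr refresh --force
    run fwupdmgr update
    run flatpak update -y
    run snap refresh
    run apt autoremove -y
    run apt autoclean
}

# update_all   # uncomment to run for real
```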

ZFS Zero Free Space (Super+F):
NOTE: See here for new info.
gnome-terminal -- /bin/sh -c 'echo Detaching b6fc7ef9-d8a7-2d4a-810e-d58d4fbd7e13...; sudo zpool detach rpool b6fc7ef9-d8a7-2d4a-810e-d58d4fbd7e13; sleep 60; clear; echo Attaching b6fc7ef9-d8a7-2d4a-810e-d58d4fbd7e13...; sudo zpool attach rpool e5457ac9-50c8-48a1-adb1-a5b88e364349 b6fc7ef9-d8a7-2d4a-810e-d58d4fbd7e13; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub bpool; sudo zpool scrub rpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Detaching e5457ac9-50c8-48a1-adb1-a5b88e364349...; sudo zpool detach rpool e5457ac9-50c8-48a1-adb1-a5b88e364349; sleep 60; echo Attaching e5457ac9-50c8-48a1-adb1-a5b88e364349...; sudo zpool attach rpool b6fc7ef9-d8a7-2d4a-810e-d58d4fbd7e13 e5457ac9-50c8-48a1-adb1-a5b88e364349; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub bpool; sudo zpool scrub rpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Detaching cbabdc98-01...; sudo zpool detach rpool cbabdc98-01; sleep 60; echo Attaching cbabdc98-01...; sudo zpool attach rpool b6fc7ef9-d8a7-2d4a-810e-d58d4fbd7e13 cbabdc98-01; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub bpool; sudo zpool scrub rpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo 0 | sudo tee /sys/module/zfs/parameters/zfs_initialize_value; sudo zpool initialize bpool a7de746a-65ed-0f4d-b1cc-3e5a1d0c62b6; sudo zpool initialize rpool b6fc7ef9-d8a7-2d4a-810e-d58d4fbd7e13 e5457ac9-50c8-48a1-adb1-a5b88e364349 cbabdc98-01 be91092b-1335-4890-a38c-ba720b20f247 3870024e-8c4d-456a-9a49-29db1093c929; sleep 15; while sudo zpool status | grep "initializing" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Clearing swap...; sudo swapoff -v /dev/sda2; sleep 60; sudo dd if=/dev/zero of=/dev/sda2 bs=512 status=progress; sleep 60; sudo mkswap /dev/sda2 -U de1a46a0-1e14-4158-b93f-a94971d01683; sleep 60; sudo swapon -a; clear; echo Finished... reboot to Zorin OS USB stick to do backup.; sleep 30'

You have to list the PARTUUID of each device in the given pool; there's a bug in ZFS that makes it throw an error if you don't. Detaching and re-attaching the drives forces zpool initialize to work properly: without that, zpool initialize doesn't discard the status from its previous run, thinks it's already done, and doesn't zero any sectors. You also have to substitute the UUID of your own swap partition.

Backup To Zipped .IMG file:
clear; echo "NOTE: THIS SHOULD ONLY BE RUN WHEN BOOTED FROM THE ZORIN OS USB STICK AND WITH ALL SOURCE DRIVE PARTITIONS UNMOUNTED!"; read -r -p "Press Enter to continue..." key; clear; printf "Are you certain that all partitions on the source drive are unmounted?\nStarting Disks application... please wait...\n"; sleep 3; gnome-disks; clear; read -r -p "Press Enter to continue..." key; clear; printf "Enter source drive (for instance: sda)...\n"; read -r source; printf "Enter destination path (for instance: /media/zorin/Storage/) ONLY... file is automatically named.\n"; read -r destination; clear; echo "Backing up. Please wait..."; sudo dd if="/dev/"$source ibs=512 obs=512 iflag=fullblock,nonblock,noatime oflag=direct conv=noerror,sync status=progress | sudo 7z a -mx9 -bd -si -mmt12 "$destination""$source""_""$(date +%Y-%m-%d_%H%M).img.7z"; read -r -p "Backup complete. Press Enter to exit..." key; exit
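The heart of that line is the dd-into-compressor pipeline. Here's a tiny self-contained demo of the same pattern against a scratch file, with gzip standing in for 7z so it runs anywhere (an assumption — the real line uses 7z a -mx9 -si), plus a round-trip check that the image really matches the source:

```shell
#!/bin/sh
# Demo: dd | compressor, the same pipeline shape as the backup one-liner,
# run against a small scratch file instead of a real drive.
set -e
src=$(mktemp)
img=$(mktemp).gz

# make a 1 MiB scratch "drive"
dd if=/dev/urandom of="$src" bs=512 count=2048 2>/dev/null

# image it through the pipeline, as the backup line does with /dev/$source;
# conv=noerror,sync pads failed reads so offsets stay aligned
dd if="$src" ibs=512 obs=512 iflag=fullblock conv=noerror,sync 2>/dev/null \
    | gzip -9 > "$img"

# verify: decompressing the image must reproduce the source byte-for-byte
gzip -dc "$img" | cmp - "$src" && echo "round-trip OK"
```

It prints round-trip OK when the decompressed image is byte-identical to the source, which is exactly the property you want out of a drive backup.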

[EDIT]
I had attempted to improve the Backup To Zipped .IMG File script a bit by automatically determining output block size. That messed things up a bit (it somehow discarded the last blocks at the end of the drive), so I've gone back to the simpler-is-better version.