ZFS, zero drives to get better backup .IMG file compression... while running Linux

This also works for those running VMs... a VM image grows over time due to data left behind on now-unused sectors; zeroing those sectors lets the image behave more like a 'sparse file', which reduces VM size.

This also works for those worried about potentially sensitive data left behind on now-unused sectors, even if the drive is encrypted (encryption can be broken... best to zero those now-unused sectors).

Ok, so I've mentioned in another post using zpool initialize to zero unused sectors via:
gnome-terminal -- /bin/sh -c 'echo 0 | sudo tee /sys/module/zfs/parameters/zfs_initialize_value > /dev/null; sudo zpool initialize bpool d7335f16-9bd1-1c4d-88b9-e952441dd227; sudo zpool initialize rpool 965d0a40-cce9-664d-8f4a-04c8075238c4 b34bba5d-f7ed-4d3e-95b5-47fd750e05f6 1a7428f8-4950-c248-b947-d8b817a0cd5a b5fd0c2c-0f02-9942-8576-d7b0b851fef1; while sudo zpool status | grep "initializing" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15'

... the code that triggers the initialize is still buggy, however... you have to include the PARTUUID of each drive in the pool that you want to initialize (otherwise it fails with something like "Error: Cannot initialize rpool. 'hole' device not available"), and it doesn't really zero every bit of previously-used but now-unused space... it keeps a tally of where it's already zeroed, and only works on areas that have changed since.
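
A note on the zero-fill value: the Solaris-style set zfs:zfs_initialize_value=0 does nothing in a Linux shell... on Linux, OpenZFS exposes the fill pattern as the zfs_initialize_value kernel module parameter, which is why the command above writes to /sys/module/zfs/parameters/zfs_initialize_value instead. You can confirm it took effect with:
cat /sys/module/zfs/parameters/zfs_initialize_value <== should print 0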

So... let's brute-force zero every single sector of the hard drive (except for the EFI and bpool partitions... but they're small, so they shouldn't affect the size of the resultant compressed .IMG file much) so we can get really good compression ratios when backing up to a compressed .IMG file... and let's do it while running Zorin OS (i.e., not booted from the Zorin OS boot USB stick).

To do this, you need a mirrored rpool... at least two drives running in parallel.

The conditions on my computer:


ZFS rpool is /dev/sda4 (PARTUUID: 965d0a40-cce9-664d-8f4a-04c8075238c4)

Mirror drive is /dev/sdb1 (PARTUUID: b34bba5d-f7ed-4d3e-95b5-47fd750e05f6)

Swap drive is /dev/sda2 (UUID: 46c1a133-bfdd-4695-a484-08fcf8286896)
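
To find the equivalent values on your own machine:
sudo blkid
ls -l /dev/disk/by-partuuid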


Detach first drive from rpool: sudo zpool detach rpool 965d0a40-cce9-664d-8f4a-04c8075238c4

Ensure rpool is still good: sudo zpool status

Zero the drive: sudo dd if=/dev/zero of=/dev/sda4 bs=512 status=progress
(A larger block size, e.g. bs=1M, speeds this up considerably... and dd ending with 'No space left on device' just means it reached the end of the partition, which is exactly what you want.)

Attach the zero'd drive to rpool: sudo zpool attach rpool b34bba5d-f7ed-4d3e-95b5-47fd750e05f6 965d0a40-cce9-664d-8f4a-04c8075238c4

Allow the automatic resilver to complete, monitor it with: sudo zpool status 5

Scrub rpool: sudo zpool scrub rpool
And monitor it with: sudo zpool status 5

Detach second drive from rpool: sudo zpool detach rpool b34bba5d-f7ed-4d3e-95b5-47fd750e05f6

Ensure rpool is still good: sudo zpool status

Zero the drive: sudo dd if=/dev/zero of=/dev/sdb1 bs=512 status=progress

Attach the zero'd drive to rpool: sudo zpool attach rpool 965d0a40-cce9-664d-8f4a-04c8075238c4 b34bba5d-f7ed-4d3e-95b5-47fd750e05f6

Allow the automatic resilver to complete, monitor it with: sudo zpool status 5

Scrub rpool: sudo zpool scrub rpool
And monitor it with: sudo zpool status 5

Turn off swap: sudo swapoff -v /dev/sda2

Zero swap partition: sudo dd if=/dev/zero of=/dev/sda2 bs=512 status=progress

Set up swap partition: sudo mkswap /dev/sda2 -U 46c1a133-bfdd-4695-a484-08fcf8286896 <== The original UUID of the swap partition

Re-enable swap: sudo swapon -a
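
To sanity-check that swap came back:
swapon --show <== should list the swap partition again
free -h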


Zeroing the swap partition wipes its swap signature, UUID included. Recreating it with the original UUID as done above means you don't have to mess with your /etc/fstab file, if you've set up your swap partition to be mounted like:
/dev/disk/by-uuid/46c1a133-bfdd-4695-a484-08fcf8286896 none swap sw,noatime 0 0


Then boot into the Zorin OS USB and create a .IMG file backup of the internal drive, compressing it with 7z.

I did an experiment... I backed up my internal drive (1 TB) to a .img.7z file (using dd chained to 7z) before doing the above, then again after. The 'before' .img.7z file is 8.4 GB in size... the 'after' .img.7z file is 2.8 GB.
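
For reference, the dd-to-7z chain is along these lines (device path and file name here are placeholders... run it from the live USB, since you can't safely image the drive you're running from):
sudo dd if=/dev/sdX bs=1M status=progress | 7z a -siInternalDrive.img InternalDrive.img.7z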

ZFS is neat, and they call the bpool (boot pool) and rpool (root pool) 'pools' for a reason... because you can slosh data back and forth in that pool (to different drives) while up and running. As shown above, you can even zero the sectors of a partition while up and running.

The main advantages of using a mirrored rpool are that you increase your read speed (to increase your write speed, you'd use mirrored SLOG drives), you have data redundancy, and you can take one of the drives out of the pool without losing data... which means if a drive fails, you can detach it from the pool, shut down, put a new drive in, boot the machine, attach the new drive to the pool, and be up and running again in short order. With a sophisticated enough computer, you can even do that without having to shut down (hot-swap capability).
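
In sketch form ({} placeholders), that failed-drive swap looks like:
sudo zpool detach rpool {failed drive PARTUUID}
(shut down, swap the hardware, boot)
sudo zpool attach rpool {surviving drive PARTUUID} {new drive PARTUUID}
sudo zpool status 5 <== watch the resilver complete
OpenZFS also has zpool replace, which does the detach-and-attach in one step.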


You can also do:
Detach first drive from rpool: sudo zpool detach rpool {Drive 1 PARTUUID}

Ensure rpool is still good: sudo zpool status

Attach first drive to rpool: sudo zpool attach rpool {Drive 2 PARTUUID} {Drive 1 PARTUUID}

Allow the automatic resilver to complete, monitor it with: sudo zpool status 5

Scrub rpool: sudo zpool scrub rpool
And monitor it with: sudo zpool status 5

Detach second drive from rpool: sudo zpool detach rpool {Drive 2 PARTUUID}

Ensure rpool is still good: sudo zpool status

Attach second drive to rpool: sudo zpool attach rpool {Drive 1 PARTUUID} {Drive 2 PARTUUID}

Allow the automatic resilver to complete, monitor it with: sudo zpool status 5

Scrub rpool: sudo zpool scrub rpool
And monitor it with: sudo zpool status 5

Doing the above apparently deletes the progress data that zpool initialize keeps, so it starts over from the beginning. If you don't delete that progress data, on subsequent runs zpool initialize believes it's already done, and exits quickly, so sectors don't get zero'd.

Then run:
gnome-terminal -- /bin/sh -c 'echo 0 | sudo tee /sys/module/zfs/parameters/zfs_initialize_value > /dev/null; sudo zpool initialize bpool {bpool PARTUUID}; sudo zpool initialize rpool {rpool Drive 1 PARTUUID} {rpool Drive 2 PARTUUID} {rpool Drive 3 (SLOG) PARTUUID} {rpool Drive 4 (SLOG) PARTUUID}; while sudo zpool status | grep "initializing" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15'
... to zero the bpool and rpool free space.


Record the UUID (not PARTUUID) of the swap partition for later use:
sudo blkid

Turn off swap: sudo swapoff -v /dev/sda2 <== The swap partition

Zero swap partition: sudo dd if=/dev/zero of=/dev/sda2 bs=512 status=progress

Set up swap partition: sudo mkswap /dev/sda2 -U {Swap Partition UUID} <== The original UUID of the swap partition

Re-enable swap: sudo swapon -a

Zeroing the swap partition wipes its swap signature, UUID included. Recreating it with the original UUID as done above means you don't have to mess with your /etc/fstab file, if you've set up your swap partition to be mounted like:
/dev/disk/by-uuid/{Swap Partition UUID} none swap sw,noatime 0 0

Now boot into the Zorin OS USB stick and run BackupToZip.sh.


I've created a script that does all of the above... if you want to use it, you'll have to edit it to reflect your drive UUIDs, PARTUUIDs and device paths.

It's here. I've set it up as a keyboard shortcut in Zorin menu > Settings > Keyboard Shortcuts.

[EDIT]
I've since added another mirror drive to the rpool... the script quickly gets complicated and lengthy (and takes a long time to run) with multiple drives. I've got a feature request in with OpenZFS to automatically erase the zpool initialize progress data once it's finished initializing, so it can be run multiple times, as that has several advantages... it reduces the size of VMs, it erases potentially sensitive data sitting on now-unused sectors, and it makes your compressed backup .IMG files a lot smaller.

With all the stuff I strip out on a new install, and with zeroing the drives via zpool initialize, the backup .img file now compresses to a mere 2.2 GB for the 1 TB internal drive.

Job and life intervened for a while, but I'm back.

I've updated the script above. I'm now running Zorin OS Core 16.3 with kernel Linux 5.15.0-83-generic x86_64. I've yet to switch to the low-latency kernel.

I bought three new 7200 RPM spinning-rust drives with 128 MB of cache, and set up ZFS so the bpool and rpool each have three mirrored drives, plus three swap drives... I booted the Zorin OS BootUSB and installed Zorin OS to each of the new drives (to get identical drive setups), then, once I'd booted the installed version, I attached the bpool and rpool mirror drives and set up fstab for the swap drives.
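
In sketch form (with {} placeholders as before), growing the freshly-installed single-drive pools into 3-way mirrors is just repeated attaches against the booted drive's partitions:
sudo zpool attach bpool {booted drive bpool PARTUUID} {drive 2 bpool PARTUUID}
sudo zpool attach bpool {booted drive bpool PARTUUID} {drive 3 bpool PARTUUID}
sudo zpool attach rpool {booted drive rpool PARTUUID} {drive 2 rpool PARTUUID}
sudo zpool attach rpool {booted drive rpool PARTUUID} {drive 3 rpool PARTUUID}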

During all that, I figured out how to use dd to zero the swap drives using the PARTUUID of each partition, rather than the /dev/sd? nomenclature, so even if a drive changes drive letters, the zeroing of the swap partitions will still work. That's at the end of the script.
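
The by-PARTUUID form of the zero-and-recreate looks like this ({} placeholders again):
sudo dd if=/dev/zero of=/dev/disk/by-partuuid/{swap partition PARTUUID} bs=512 status=progress
sudo mkswap /dev/disk/by-partuuid/{swap partition PARTUUID} -U {original swap UUID}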

As such, the script to zero the free space is more complex...

ZFS Zero Free Space: Super+Z: gnome-terminal -- /bin/sh -c 'echo Detaching bpool afcc4781-fd49-1c4f-852b-081d0fe90de4...; sudo zpool detach bpool afcc4781-fd49-1c4f-852b-081d0fe90de4; sleep 60; echo Attaching bpool afcc4781-fd49-1c4f-852b-081d0fe90de4...; sudo zpool attach bpool a4fa8f4d-8d99-7e4f-82fb-d4df4c210d9b afcc4781-fd49-1c4f-852b-081d0fe90de4; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub bpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Detaching bpool a4fa8f4d-8d99-7e4f-82fb-d4df4c210d9b...; sudo zpool detach bpool a4fa8f4d-8d99-7e4f-82fb-d4df4c210d9b; sleep 60; echo Attaching bpool a4fa8f4d-8d99-7e4f-82fb-d4df4c210d9b...; sudo zpool attach bpool 2b7d5c58-794b-6c4a-8291-0070abe0957d a4fa8f4d-8d99-7e4f-82fb-d4df4c210d9b; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub bpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Detaching bpool 2b7d5c58-794b-6c4a-8291-0070abe0957d...; sudo zpool detach bpool 2b7d5c58-794b-6c4a-8291-0070abe0957d; sleep 60; echo Attaching bpool 2b7d5c58-794b-6c4a-8291-0070abe0957d...; sudo zpool attach bpool afcc4781-fd49-1c4f-852b-081d0fe90de4 2b7d5c58-794b-6c4a-8291-0070abe0957d; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub bpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Detaching rpool 92be2281-e48d-7a4f-b41a-4fe5a7696f45...; sudo zpool detach rpool 92be2281-e48d-7a4f-b41a-4fe5a7696f45; sleep 60; echo Attaching rpool 92be2281-e48d-7a4f-b41a-4fe5a7696f45...; sudo zpool attach rpool 5cd72a1a-5e56-3642-94c1-1a893ad5210a 92be2281-e48d-7a4f-b41a-4fe5a7696f45; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub rpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Detaching rpool 5cd72a1a-5e56-3642-94c1-1a893ad5210a...; sudo zpool detach rpool 5cd72a1a-5e56-3642-94c1-1a893ad5210a; sleep 60; echo Attaching rpool 5cd72a1a-5e56-3642-94c1-1a893ad5210a...; sudo zpool attach rpool 20159379-a613-b148-a09c-ecad76e64823 5cd72a1a-5e56-3642-94c1-1a893ad5210a; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub rpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Detaching rpool 20159379-a613-b148-a09c-ecad76e64823...; sudo zpool detach rpool 20159379-a613-b148-a09c-ecad76e64823; sleep 60; echo Attaching rpool 20159379-a613-b148-a09c-ecad76e64823...; sudo zpool attach rpool 92be2281-e48d-7a4f-b41a-4fe5a7696f45 20159379-a613-b148-a09c-ecad76e64823; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub rpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo 0 | sudo tee /sys/module/zfs/parameters/zfs_initialize_value > /dev/null; sudo zpool initialize bpool afcc4781-fd49-1c4f-852b-081d0fe90de4; sudo zpool initialize bpool a4fa8f4d-8d99-7e4f-82fb-d4df4c210d9b; sudo zpool initialize bpool 2b7d5c58-794b-6c4a-8291-0070abe0957d; sudo zpool initialize rpool 92be2281-e48d-7a4f-b41a-4fe5a7696f45; sudo zpool initialize rpool 5cd72a1a-5e56-3642-94c1-1a893ad5210a; sudo zpool initialize rpool 20159379-a613-b148-a09c-ecad76e64823; sleep 15; while sudo zpool status | grep "initializing" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Clearing swap...; sudo swapoff -a; sleep 60; sudo dd if=/dev/zero of=/dev/disk/by-partuuid/8b86247b-8114-674b-ac12-9d7d9f726614 bs=512 status=progress; sleep 60; sudo mkswap /dev/disk/by-partuuid/8b86247b-8114-674b-ac12-9d7d9f726614 -U 21ebe95a-cdd6-40d1-b5a2-ff44a768b47d; sleep 60; sudo dd if=/dev/zero of=/dev/disk/by-partuuid/55491090-1362-1444-9f3b-25cb9a59ae1f bs=512 status=progress; sleep 60; sudo mkswap /dev/disk/by-partuuid/55491090-1362-1444-9f3b-25cb9a59ae1f -U c39fbec2-4aa6-4255-b6c3-e9540b397713; sleep 60; sudo dd if=/dev/zero of=/dev/disk/by-partuuid/5e435c67-aa38-784e-aa1e-6193591ad782 bs=512 status=progress; sleep 60; sudo mkswap /dev/disk/by-partuuid/5e435c67-aa38-784e-aa1e-6193591ad782 -U d4398e10-8183-4a5a-88a6-9d830a6f2a6d; sleep 60; sudo swapon -a; clear; echo Finished... reboot to Zorin OS USB stick to do backup.; sleep 30'

For some reason, the computer put the third drive as /dev/sdf, rather than /dev/sdc... no matter, I'm using UUID and PartUUID.

sudo blkid:

/dev/sda1: UUID="B2C2-C122" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="c0a607f2-4891-436c-96dc-6084f6e1fb69"

/dev/sda2: UUID="21ebe95a-cdd6-40d1-b5a2-ff44a768b47d" TYPE="swap" PARTUUID="8b86247b-8114-674b-ac12-9d7d9f726614"

/dev/sda3: LABEL="bpool" UUID="10724362119177157490" UUID_SUB="2744633821596673580" TYPE="zfs_member" PARTUUID="afcc4781-fd49-1c4f-852b-081d0fe90de4"

/dev/sda4: LABEL="rpool" UUID="14392359408320664089" UUID_SUB="15634694141526690318" TYPE="zfs_member" PARTUUID="92be2281-e48d-7a4f-b41a-4fe5a7696f45"
---------------
/dev/sdb1: UUID="01ED-FF42" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="672de606-823a-4b69-a676-4ff733353ee9"

/dev/sdb2: UUID="c39fbec2-4aa6-4255-b6c3-e9540b397713" TYPE="swap" PARTUUID="55491090-1362-1444-9f3b-25cb9a59ae1f"

/dev/sdb3: LABEL="bpool" UUID="10724362119177157490" UUID_SUB="15119122305742970831" TYPE="zfs_member" PARTUUID="a4fa8f4d-8d99-7e4f-82fb-d4df4c210d9b"

/dev/sdb4: LABEL="rpool" UUID="14392359408320664089" UUID_SUB="475542767772841823" TYPE="zfs_member" PARTUUID="5cd72a1a-5e56-3642-94c1-1a893ad5210a"
---------------
/dev/sdc1: LABEL="Storage1" UUID="51D5FFB655290F5B" TYPE="ntfs" PTTYPE="dos" PARTUUID="72747aed-ab88-4280-8359-3efd1faec448"
---------------
/dev/sdd1: LABEL="Storage2" UUID="30DB62F21CA43816" TYPE="ntfs" PTTYPE="dos" PARTUUID="dd5b51cd-4429-465f-af4e-dec7909919fc"
---------------
/dev/sde1: LABEL="Ventoy" UUID="64F7-801D" TYPE="exfat" PTTYPE="dos" PARTLABEL="Ventoy" PARTUUID="ae70ad41-9f03-45d3-7493-63ecdd58d2e0"

/dev/sde2: SEC_TYPE="msdos" LABEL_FATBOOT="VTOYEFI" LABEL="VTOYEFI" UUID="440A-1007" TYPE="vfat" PARTLABEL="VTOYEFI" PARTUUID="7665c650-8a74-dafc-0a37-92461a4c1a17"
---------------
/dev/sdf1: UUID="1B0B-0AE5" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="7a1daefd-6401-4f00-b04a-51e6c7014c97"

/dev/sdf2: UUID="d4398e10-8183-4a5a-88a6-9d830a6f2a6d" TYPE="swap" PARTUUID="5e435c67-aa38-784e-aa1e-6193591ad782"

/dev/sdf3: LABEL="bpool" UUID="10724362119177157490" UUID_SUB="1455409066216568357" TYPE="zfs_member" PARTUUID="2b7d5c58-794b-6c4a-8291-0070abe0957d"

/dev/sdf4: LABEL="rpool" UUID="14392359408320664089" UUID_SUB="11094338582255012156" TYPE="zfs_member" PARTUUID="20159379-a613-b148-a09c-ecad76e64823"
---------------
/dev/sdg: LABEL="OldRB" UUID="0AE59F0D24B05832" TYPE="ntfs" PTTYPE="dos"

Complicated, you weren't kidding!

Great to have you back @Mr_Magoo


The zeroing of drive free space really makes a difference when you compress the .img files of each drive, taken as backups... the three 500.1 GB drives compress to 7.6 GB combined.

"Why only 500 GB?", you ask... because the old drives were 1 TB, and weren't even 2% filled, and the drives I wanted didn't come any smaller than 500 GB.

There are still some problems... I don't have the low-latency kernel installed yet, and it doesn't appear as though the OS is kicking the USB ports into USB 3.1 mode, despite everything (USB ports, USB hubs and USB drives) being USB 3.1 compatible.
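
To check what the ports actually negotiated, lsusb (from usbutils) prints the device tree with link speeds... 5000M or 10000M next to a device means it's running at USB 3.x rates:
lsusb -t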

I'm working on getting the low-latency kernel installed now.

[UPDATE]
Ok, I've got the low-latency kernel installed:

uname -a
Linux HP-Laptop 5.15.0-83-lowlatency #92~20.04.1-Ubuntu SMP PREEMPT Wed Aug 23 16:36:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

grep PREEMPT_DYNAMIC /boot/config-$(uname -r)
CONFIG_PREEMPT_DYNAMIC=y
CONFIG_HAVE_PREEMPT_DYNAMIC=y

One problem still remaining is one I've run across before... this kernel limits CPU frequency to between 1330 and 2300 MHz, but it's supposed to be able to range from 400 MHz to 4400 MHz. I'll work on that.
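
To see what cpufreq thinks the limits are (cpupower is in the linux-tools packages; the sysfs values are in kHz):
cpupower frequency-info
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq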


I've updated the code to zero ZFS free space so it's easier for a person to set it up.

Now, all one must do is run sudo blkid and enter the relevant PARTUUIDs and UUIDs at the top of the code... the rest of the code (below echo =========================;) isn't changed.

gnome-terminal -- /bin/sh -c 'bpool1PartUUID=cd2f0217-f65e-d64b-8e3a-d5622c27318c; bpool2PartUUID=2b7d5c58-794b-6c4a-8291-0070abe0957d; bpool3PartUUID=a4fa8f4d-8d99-7e4f-82fb-d4df4c210d9b; rpool1PartUUID=19bf24c9-36d2-4e41-a24d-585afea57f6f; rpool2PartUUID=20159379-a613-b148-a09c-ecad76e64823; rpool3PartUUID=5cd72a1a-5e56-3642-94c1-1a893ad5210a; swap1PartUUID=944d2bbc-f9b1-1347-ba16-7f8463997586; swap1UUID=f51c1888-c76e-4f16-b25e-2ba99b697696; swap2PartUUID=5e435c67-aa38-784e-aa1e-6193591ad782; swap2UUID=d4398e10-8183-4a5a-88a6-9d830a6f2a6d; swap3PartUUID=55491090-1362-1444-9f3b-25cb9a59ae1f; swap3UUID=c39fbec2-4aa6-4255-b6c3-e9540b397713; echo =========================; echo Detaching bpool $bpool1PartUUID; sudo zpool detach bpool $bpool1PartUUID; sleep 60; echo Attaching bpool $bpool1PartUUID; sudo zpool attach bpool $bpool2PartUUID $bpool1PartUUID; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub bpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Detaching bpool $bpool2PartUUID; sudo zpool detach bpool $bpool2PartUUID; sleep 60; echo Attaching bpool $bpool2PartUUID; sudo zpool attach bpool $bpool3PartUUID $bpool2PartUUID; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub bpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Detaching bpool $bpool3PartUUID; sudo zpool detach bpool $bpool3PartUUID; sleep 60; echo Attaching bpool $bpool3PartUUID; sudo zpool attach bpool $bpool1PartUUID $bpool3PartUUID; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub bpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Detaching rpool $rpool1PartUUID; sudo zpool detach rpool $rpool1PartUUID; sleep 60; echo Attaching rpool $rpool1PartUUID; sudo zpool attach rpool $rpool2PartUUID $rpool1PartUUID; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub rpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Detaching rpool $rpool2PartUUID; sudo zpool detach rpool $rpool2PartUUID; sleep 60; echo Attaching rpool $rpool2PartUUID; sudo zpool attach rpool $rpool3PartUUID $rpool2PartUUID; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub rpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Detaching rpool $rpool3PartUUID; sudo zpool detach rpool $rpool3PartUUID; sleep 60; echo Attaching rpool $rpool3PartUUID; sudo zpool attach rpool $rpool1PartUUID $rpool3PartUUID; sleep 15; while sudo zpool status | grep "resilver in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sleep 10; sudo zpool events -c; sudo zpool scrub rpool; while sudo zpool status | grep "scan: *scrub in progress" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo 0 | sudo tee /sys/module/zfs/parameters/zfs_initialize_value > /dev/null; sudo zpool initialize bpool $bpool1PartUUID; sudo zpool initialize bpool $bpool2PartUUID; sudo zpool initialize bpool $bpool3PartUUID; sudo zpool initialize rpool $rpool1PartUUID; sudo zpool initialize rpool $rpool2PartUUID; sudo zpool initialize rpool $rpool3PartUUID; sleep 15; while sudo zpool status | grep "initializing" > /dev/null; do clear; sudo zpool status -Td; sleep 2; done; clear; sudo zpool status -Td; sleep 15; clear; echo Clearing swap...; sudo swapoff -a; sleep 60; sudo dd if=/dev/zero of=/dev/disk/by-partuuid/$swap1PartUUID bs=512 status=progress; sleep 60; sudo mkswap /dev/disk/by-partuuid/$swap1PartUUID -U $swap1UUID; sleep 60; sudo dd if=/dev/zero of=/dev/disk/by-partuuid/$swap2PartUUID bs=512 status=progress; sleep 60; sudo mkswap /dev/disk/by-partuuid/$swap2PartUUID -U $swap2UUID; sleep 60; sudo dd if=/dev/zero of=/dev/disk/by-partuuid/$swap3PartUUID bs=512 status=progress; sleep 60; sudo mkswap /dev/disk/by-partuuid/$swap3PartUUID -U $swap3UUID; sleep 60; sudo swapon -a; clear; echo Finished... reboot to Zorin OS USB stick to do backup.; sleep 30'

Is this for a dual-drive setup, or multi-partition? I ask because you could prompt for input of the UUIDs, save them to a file, then check for the file and only ask for input if it's missing.

Then you can wrap the detaches and attaches in error handling that catches any issues and asks for clarification (e.g., a UUID that was mistyped or doesn't exist).

This would keep people from having to modify the script and it would still perform as expected.
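
Something like this rough sketch is what I mean (the file location and variable names are invented):

CONF=$HOME/.zfs-zero-uuids.conf
if [ -f "$CONF" ]; then
  . "$CONF"
else
  printf 'rpool drive 1 PARTUUID: '; read rpool1PartUUID
  printf 'rpool drive 2 PARTUUID: '; read rpool2PartUUID
  echo "rpool1PartUUID=$rpool1PartUUID" > "$CONF"
  echo "rpool2PartUUID=$rpool2PartUUID" >> "$CONF"
fi
# bail out early if a PARTUUID doesn't actually exist on this machine
[ -e "/dev/disk/by-partuuid/$rpool1PartUUID" ] || { echo "No such PARTUUID: $rpool1PartUUID"; exit 1; }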

There are three identical drives (500 GB, 128 MB cache, 7200 RPM... one internal, two external) on three separate interfaces (three different USB hubs built into the motherboard)... basically, I booted the Zorin OS BootUSB and installed Zorin OS to each of the three drives, so I had identical setups on each.

Then I booted from the internal drive and added the bpool and rpool partitions on the external drives to the respective bpool and rpool of the booted OS. That gives me about 820 MB/s maximum read throughput, which is up there with an SSD, without having to worry about write-wearing the drive.

I have two backup EFI partitions... right now I manually copy the internal drive's EFI partition over to the two external drives' EFI partitions, but I'm considering some code that'll do it automatically at each boot into the graphical shell... that way, if a boot stalls, the two backups aren't overwritten, and I have them to copy back over to the internal drive's EFI partition if need be.
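
For that auto-copy, a rough sketch ({} PARTUUIDs are placeholders; it assumes all three EFI partitions are identically sized, which they are here since the installs are identical):

#!/bin/sh
# clone the internal drive's EFI partition onto the two backup EFI partitions
dd if=/dev/disk/by-partuuid/{internal EFI PARTUUID} of=/dev/disk/by-partuuid/{backup 1 EFI PARTUUID} bs=1M
dd if=/dev/disk/by-partuuid/{internal EFI PARTUUID} of=/dev/disk/by-partuuid/{backup 2 EFI PARTUUID} bs=1M

...run as root from a systemd unit or an @reboot cron job once boot has reached the graphical shell.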

I've got room for an NVMe drive (3500 MB/s)... I'm considering buying one and setting it as the primary drive, with the internal and two external drives as mirrors... that'd be wicked fast, and I'd have the data redundancy in case the NVMe drive gets write-worn... I'd just have to pop the old drive out, pop a new drive in, clone the backup of the old NVMe drive onto the new drive, and I'd be back up and running.
