Any problems foreseen with this hardware?

{sigh} Ever get the feeling that the universe is just sadistically messing with you?

I guess it was too much to ask of Insyde (the maker of the BIOS in this computer) to make a Windows program to burn the BIOS update to USB that doesn't crash out when it brings up the file browser... when running under Windows. I tried all the Windows Compatibility settings, no joy.

I'll have to do some more research and experimentation, and put together my own USB stick to get the update done.


They are geared toward Windows.
It's the dominant OS and in their mind, Linux secures less than 3% of the market.

https://www.dell.com/support/kbdoc/en-us/000131486/update-the-dell-bios-in-a-linux-or-ubuntu-environment

Some manufacturers have begun cooperating with Linux and supplying updates that can be run directly from within your Linux OS. They can be listed using dmidecode.
https://raelcunha.com/2015/12/23/updating-bios-with-ubuntu/


Yeah, I borrowed my wife's Win11 computer to download the HP firmware file, but the program crashes and exits every time it brings up the file browser (what one would use, if it ran properly, to choose the USB stick to burn to).

Come to find out, the UEFI/BIOS update program crashing on my wife's Win11 computer is due to all the security 'features' on Windows 11... which is ironic, since the update program is supposedly designed for Windows 11.

My wife won't let me disable all those 'features' to make another attempt on her computer.

Another round of attempts:

I burned Ventoy to a USB stick, then downloaded Hiren's Boot CD (Windows XP), Windows 7 PE and Win10 PE. The great thing about Ventoy is that after you've burned it, you just drop the .ISO files you want onto that USB drive, then when you boot, it gives you a choice of which .ISO file to boot up.

I tried WinXP... the UEFI/BIOS updater wouldn't even run.

I tried Win7... the UEFI/BIOS updater wouldn't even run.

I tried Win10... and I was able to burn the UEFI/BIOS update to a memory stick!

I did all that on my old laptop (also now running Zorin Core OS). I then carried the USB stick over to the new computer, shut the computer down, unplugged all the USB drives, plugged in the UEFI/BIOS update USB stick, booted, hit F9, selected the "Boot from EFI" option... and it gave me the error that signature verification failed and I should check my Secure Boot settings... Secure Boot is enabled, with the factory-default keys. I tried it with Secure Boot enabled and disabled... no joy.

Ah, well... HP is known for making good hardware and atrocious software... I guess that extends to their UEFI/BIOS update programs.


I got a 4TB spinning-rust hard drive. I created a partition on it with the same exact size down to the byte as the internal drive and with the same partition type. I then added that partition to the rpool as a mirror using the PARTUUID:

sudo blkid gives you the PARTUUID of all the hard drive partitions.

/dev/sdb1: LABEL="rpool" UUID="1542370796205579292" UUID_SUB="4559413643218391176" TYPE="zfs_member" PARTUUID="9044761d-9a05-435e-8681-050da7312951"
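If you want just the PARTUUID by itself, here's a small sketch. The parsing below runs on the sample /dev/sdb1 line from above; on a live system, blkid can also print the value directly.

```shell
# Pull the PARTUUID out of a line of blkid output.
# Sample input is the /dev/sdb1 line shown above.
line='/dev/sdb1: LABEL="rpool" UUID="1542370796205579292" UUID_SUB="4559413643218391176" TYPE="zfs_member" PARTUUID="9044761d-9a05-435e-8681-050da7312951"'
partuuid=$(printf '%s\n' "$line" | grep -o 'PARTUUID="[^"]*"' | cut -d'"' -f2)
echo "$partuuid"

# On a live system, blkid can do this in one step:
#   sudo blkid -s PARTUUID -o value /dev/sdb1
```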

sudo zpool attach rpool 5f52e75c-505f-9941-a9c4-da071f9836f0 9044761d-9a05-435e-8681-050da7312951

5f52e75c-505f-9941-a9c4-da071f9836f0 is the existing drive on rpool (the internal drive)

9044761d-9a05-435e-8681-050da7312951 is the drive that is mirroring the internal drive

I then issued a sudo zpool resilver rpool command, then a sudo zpool scrub rpool command.

sudo zpool status now shows:

  pool: bpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:00:03 with 0 errors on Tue Dec 6 15:37:50 2022
config:

        NAME                                    STATE     READ WRITE CKSUM
        bpool                                   ONLINE       0     0     0
          c0714ccb-bc6f-5a4c-80bd-46777d248b07  ONLINE       0     0     0
        cache
          112ff53e-6b95-874a-bb76-a2a3d1978ebf  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:11:41 with 0 errors on Tue Dec 6 15:49:35 2022
config:

        NAME                                      STATE     READ WRITE CKSUM
        rpool                                     ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            5f52e75c-505f-9941-a9c4-da071f9836f0  ONLINE       0     0     0
            9044761d-9a05-435e-8681-050da7312951  ONLINE       0     0     0
        cache
          8bba85ae-6f8d-1f4a-9251-1082cbe4c197    ONLINE       0     0     0

errors: No known data errors

sudo zpool iostat -v shows:

                                            capacity     operations     bandwidth
pool                                      alloc   free   read  write   read  write
----------------------------------------  -----  -----  -----  -----  -----  -----
bpool                                      258M  1.62G      0      0      0      0
  c0714ccb-bc6f-5a4c-80bd-46777d248b07     258M  1.62G      0      0      0      0
cache                                         -      -      -      -      -      -
  112ff53e-6b95-874a-bb76-a2a3d1978ebf     464K  28.7G      0      0      0      0
----------------------------------------  -----  -----  -----  -----  -----  -----
rpool                                     8.58G   911G      0     48      0   497K
  mirror                                  8.58G   911G      0     48      0   497K
    5f52e75c-505f-9941-a9c4-da071f9836f0      -      -      0     25      0   250K
    9044761d-9a05-435e-8681-050da7312951      -      -      0     23      0   249K
cache                                         -      -      -      -      -      -
  8bba85ae-6f8d-1f4a-9251-1082cbe4c197    3.77G  24.9G      0      0      0      0
----------------------------------------  -----  -----  -----  -----  -----  -----


I'll use the other 3TB of the hard drive space as storage and backup.


I've been playing around with the computer... only when you know the limits of your machine do you know not to cross those limits... or something. :slight_smile:

So first I booted the Zorin OS USB stick and restored the .img file backup I'd done yesterday, just to be sure it works. It did.

Now I'm fiddling with compressing the .img file... all you have to do is start up Nautilus, right-click the file, select "Compress", then set up the compression options. You'll see a little icon with a circular progress indication pop up in the header. You can then close Nautilus, the file compression will continue in the background. I'll report back on how much space is saved by compressing backup files.

Alternatively, you can back up right to a compressed file, and restore directly from that compressed file...

To back up to a compressed file:
sudo dd if=/dev/sda conv=sync,noerror bs=4096 status=progress | gzip -c > /save/path/$(printf '%(%F_%H-%M-%S)T').tar.gz

You'd have to change two things to fit your scenario: the device node of the drive you're backing up (ie: sda, sdb, sdc, etc.), and the path of the backup file you'll be creating. The file name will automatically be formatted as YYYY-MM-DD_HH-MM-SS.tar.gz (for example: 2022-12-09_14-22-54.tar.gz). That makes it easy to sort your backup files by date.
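As a sketch of the filename piece in isolation: `date +%F_%H-%M-%S` produces the same YYYY-MM-DD_HH-MM-SS stamp as bash's `printf '%(%F_%H-%M-%S)T'`, and works in any shell.

```shell
# Build the timestamped backup filename on its own.
# date +%F expands to YYYY-MM-DD; %H-%M-%S gives the time with dashes.
name="$(date +%F_%H-%M-%S).tar.gz"
echo "$name"
```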

To restore from a compressed file:
gunzip -c /path/backup.tar.gz | sudo dd of=/dev/sda status=progress

Again, you'd have to change: 1) the path to and name of your saved backup file, and: 2) the device node of the drive you're restoring to (ie: sda, sdb, sdc, etc.).

I have another idea... zeroing the free space of the drive before doing the backup. The compression algorithm chunks all those zeros down to practically nothing, so in effect you're creating something akin to a sparse file. The compressed file then doesn't carry the weight of all the sectors still holding leftover non-zero bits from deleted data, so it should end up really small.
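The usual trick on a mounted filesystem is to fill the free space with a file of zeros and then delete it. Below is that idea sketched in a temp directory (on real hardware you'd point dd at the mountpoint and let it run until the disk is full). Note this is the ext4-style approach; ZFS with compression enabled handles zero writes differently, which is why it needs its own solution.

```shell
# Free-space zeroing, illustrated in a temp dir instead of a real mountpoint.
# On a live system: sudo dd if=/dev/zero of=/mountpoint/zero.fill bs=1M
#                   (runs until "disk full"), then sudo rm /mountpoint/zero.fill
workdir=$(mktemp -d)
dd if=/dev/zero of="$workdir/zero.fill" bs=1M count=1 status=none

# Sanity check: the fill file contains nothing but NUL bytes.
if tr -d '\0' < "$workdir/zero.fill" | grep -q .; then
  zeroed=no
else
  zeroed=yes
fi
rm -r "$workdir"
echo "$zeroed"
```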

I just have to find a program that'll zero the free space of all partitions, one after another, and can work with ZFS.

After I'd ascertained that the backups actually worked, I booted the Zorin USB stick, then used:

sudo dd if=/dev/zero of=/dev/sdb1 bs=4096 status=progress
sudo dd if=/dev/zero of=/dev/sdb3 bs=4096 status=progress
sudo dd if=/dev/zero of=/dev/sdd1 bs=4096 status=progress
sudo dd if=/dev/zero of=/dev/sdf1 bs=4096 status=progress

... to zero (ie: completely wipe) the mirror drive, the second swap drive and the rpool and bpool L2ARC cache drives. Then I rebooted. It started up, sudo zpool status prompted me to replace the mirror drive and add the L2ARC cache drives back again, the mirror drive resilvered, I added the second swap drive again, issued a sudo zpool scrub rpool command, and all was well.

ZFS is almost literally unbreakable.

[EDIT]

The compression of the .img file backup of the main hard drive is complete. The .img file is 1,000,204,886,016 bytes, the compressed file is 28,541,191,012 bytes. That's a compression ratio of 35.04425816 : 1.

[/EDIT]

Another idea I had was to use squashfs (the same thing used on the Zorin OS USB stick) to backup the entire main drive... so even if your computer crashes, you can boot from a USB stick and you'll have exactly the same Zorin OS as you had on your hard drive. The neat thing about this is that squashfs is mountable (you could, for example, extract a file you need from the backup without having to do a full restore).

Ok, I'm booted off the Zorin OS USB stick. I went into the Drives application and ensured all the partitions on the main drive were unmounted, and I'm trying out the 'compress the drive straight to a file' option:

sudo dd if=/dev/sda conv=sync bs=131072 status=progress | gzip -c -9 > /media/zorin/Storage/2022-12-09.tar.gz

It is s...l...o...w, mainly because gzip is single-threaded. There's a multi-threaded gzip out there named pigz, but I couldn't find all the dependencies and get them to install from within the Zorin OS USB stick boot session. If we had pigz installed to the LiveBoot version of Zorin OS, backups of this sort would be much quicker.

I couldn't get the automatic file name code to work, I had to manually name the file. I'll work on it.

[EDIT]
Ah... it's s...l...o...w when it's chucking data, but when it hits sectors that have no data, it's just as fast as taking an uncompressed image of the drive. That tells me it's gzip's single-thread execution that's slowing it down. We need pigz (multi-threaded gzip) on the Zorin OS USB stick.

This is what I'm using now. It's given me the highest data transfer rate:
sudo dd if=/dev/sda conv=sync bs=512 oflag=sync,direct status=progress | gzip -c -9 > /media/zorin/Storage/2022-12-09.tar.gz

Ok, the backup using gzip is done. It took 3 hours 36 minutes for a 1 TB drive. The original drive size was 1,000,204,886,016 bytes, the compressed file is 39,953,358,850 bytes, giving a compression ratio of 25.034312879 : 1.

Not quite as compressed as the built-in tar.xz compression.

[EDIT]
The gzip chained to dd doesn't work. It makes a tar.gz file, but that file appears to be corrupted, it won't open. I'm now experimenting with 7zip (7z)... I'm still generating the file, but it's going fast and it's using all the CPUs.

[EDIT 2]
Ok, the backup with 7zip and dd chained together took 3 hours 28 minutes for a 1 TB drive; the source drive is 1,000,204,886,016 bytes, the compressed file is 28,479,992,539 bytes, giving a compression ratio of 35.119562782 : 1. The compressed file can be extracted, so that works.

Boot Zorin OS on USB stick. (Reboot, press F9, select Boot from CD or Boot from USB, whichever your Zorin OS is on.)

TO BACK UP:
sudo dd if=/dev/sda conv=sync bs=128M iflag=fullblock oflag=sync,direct status=progress | sudo 7z a -mx9 -bd -si -mmt12 /media/zorin/Storage/$(date +%Y_%m_%d-%H_%M)_sda.tar.7z

Why bs=128M? Because the ideal size is equal to the hard drive on-board cache, if you have sufficient memory to run with that large a bs. Default is 512 bytes. You'll have to change that to whatever on-board cache your hard drive has.

You'll have to manually change the device node and path of the drive you're backing up, and the device node in the output file name. You'll also have to change the -mmt setting to reflect the number of CPU threads your machine has (if you have HyperThreading enabled, that's double the number of physical cores; if disabled, the number of physical cores; if you want to limit how hard your processor is hit, you can use fewer than that maximum, but it'll slow down the backup process).

TO EXTRACT:
sudo 7z x /media/zorin/Storage/2022_12_10-01_49_sda.tar.7z -mmt12 -o/media/zorin/Storage/

You'll have to manually change the path to the compressed file, and the path to save the uncompressed, extracted file to.

You'll note I got the automatic file date / time thing working when backing up.

So the compressed file will be saved as, for example:
2022_12_10-01_49_sda.tar.7z

That's YYYY_MM_DD-HH_MM format, with the device node for the backed-up drive appended.
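A quick illustration of why the zero-padded stamp is handy: the names sort chronologically as plain strings, so `ls` or `sort` lines backups up oldest-to-newest automatically (the filenames below are made up for the demo).

```shell
# Zero-padded YYYY_MM_DD-HH_MM stamps sort chronologically as plain strings.
sorted=$(printf '%s\n' \
  '2022_12_10-01_49_sda.tar.7z' \
  '2022_01_03-23_59_sda.tar.7z' \
  '2022_12_09-14_22_sda.tar.7z' | sort)
echo "$sorted"
```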

[EDIT 3]
I've changed the date format so it more closely reflects what the resultant file looks like when you image a drive in the Disks application.

sda_$(date +%Y-%m-%d_%H%M).tar.7z

Now it looks like: sda_YYYY-MM-DD_HHMM (ie: sda_2022-12-10_1445.tar.7z)

You'd change the 'sda' bit to reflect the device node of the drive you're backing up (ie: sdb, sdc, sdd, etc.), but otherwise leave the file name as it is. If you're only backing up a partition (rather than the entire drive), you could use, for instance, sda1 or sda2 or sda3, etc. That way you can sort the files by where they were backed up from, then by the date.

Wow, zeroing the sectors of your hard drive really makes a difference in the size of your compressed backups.

The drive is 1 TB, and the compressed image taken of that drive is only 7.6 GB... that's because 7z treats a long run of zeros on the hard drive sectors as if they're one, just storing where that string of zeros ends.
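You can see the effect without touching a real drive. This sketch uses gzip since it's on every system; 7z's LZMA behaves the same way, only with a better ratio.

```shell
# A run of zeros compresses to almost nothing; real data does not.
workdir=$(mktemp -d)
dd if=/dev/zero of="$workdir/zeros.img" bs=1M count=8 status=none
gzip -9 -c "$workdir/zeros.img" > "$workdir/zeros.img.gz"
orig=$(stat -c%s "$workdir/zeros.img")
comp=$(stat -c%s "$workdir/zeros.img.gz")
echo "original: $orig bytes, compressed: $comp bytes"
rm -r "$workdir"
```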

We've got to find some program that can zero the free space on ZFS drives. Perhaps an application that watches what data is relocated, then wipes the space that data previously occupied, running at Idle priority, perhaps started once a day via a cron job.

Huh, that didn't go as planned. When the .img file is extracted from the .img.7z file, it's 120 MB larger than when it went into that compressed file... which means it's 120 MB larger than the hard drive that .img file was taken from... which means you cannot restore the drive using that .img file.

I'm going to have to do more research to figure out how, exactly, 7z adds size to the file when it's unzipping it.

So the .img file was 1 TB, it was zipped to 7.6 GB (because I'd zero'd the free space on the drive), and it was unzipped to 1 TB + 120 MB.

Don't use any of the code in prior posts... it pads null data, and your resultant compressed file ends up decompressing to a larger size than when you started, so you can't use it to restore the hard drive.

It was the 'conv=sync' flag that did that.
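The padding is easy to demonstrate: conv=sync tells dd to pad every short input block with NULs up to the full block size, so piped data grows to a multiple of bs.

```shell
# Demonstrate the conv=sync padding that inflated the restored image:
# 5 bytes of input come out as one full 512-byte block.
printf 'hello' | dd conv=sync bs=512 of=padded.bin status=none
size=$(stat -c%s padded.bin)
echo "$size"
rm padded.bin
```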

I'm currently testing:

echo "NOTE: THIS SHOULD ONLY BE RUN WHEN BOOTED FROM THE ZORIN OS USB STICK AND WITH ALL SOURCE DRIVE PARTITIONS UNMOUNTED!"
read -r -p "Press Enter to continue..." key
clear
sudo df -h
echo "Are you certain that all partitions on the source drive are unmounted?"
read -r -p "Press Enter to continue..." key
clear
echo "Backing up. Please wait..."
sleep 3
echo "Enter source drive (for instance: sda)..."
read -r source
echo "Enter destination path (for instance: /media/zorin/Storage/) ONLY... file is automatically named."
read -r destination
sudo dd if="/dev/$source" ibs=512 obs=20M iflag=fullblock,sync,nocache,nonblock,noatime oflag=sync,direct conv=noerror status=progress | sudo 7z a -mx9 -bd -si -mmt12 "${destination}${source}_$(date +%Y-%m-%d_%H%M).img.7z"
read -r -p "Backup complete. Press Enter to exit..." key
exit

I'm doing the backup and compress right now, then I'll extract the file and compare checksums. It should be noted that since I've split input and output block size to allow a larger output block size than input block size, this hits the CPU really hard... it's definitely a memory and CPU-constrained process now.

[EDIT]
The above works. The files are identical:

sda_2022-12-14_1616.img (backed up from /dev/sda via Disks application > Create Disk Image):
MD5:		deef20d7e1bff8d4ffc64714a1b49e20
SHA1:		3623ca5e5ac6bcd57b7237839328dc84bf93de0e
SHA256:	ec0923b04ba5f4db827cdf8705ac940f0dcff637d12ecdcdb494e133eb2d3ed2
CRC32:		22c383ed
Size:		1,000,204,886,016 bytes

2022-12-15_0219.img (backed up from /dev/sda via dd chained to 7z, then uncompressed from .7z file)
MD5:		deef20d7e1bff8d4ffc64714a1b49e20
SHA1:		3623ca5e5ac6bcd57b7237839328dc84bf93de0e
SHA256:	ec0923b04ba5f4db827cdf8705ac940f0dcff637d12ecdcdb494e133eb2d3ed2
CRC32:		22c383ed
Size:		1,000,204,886,016 bytes
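For reference, a minimal sketch of how such a comparison can be done from the shell (the file names and contents here are hypothetical stand-ins for the two 1 TB images):

```shell
# Hash both images and compare the digests; identical digests mean a
# bit-identical restore. (sha256sum works the same way as md5sum here.)
workdir=$(mktemp -d)
printf 'disk image contents' > "$workdir/original.img"
cp "$workdir/original.img" "$workdir/restored.img"
sum_a=$(md5sum "$workdir/original.img" | cut -d' ' -f1)
sum_b=$(md5sum "$workdir/restored.img" | cut -d' ' -f1)
if [ "$sum_a" = "$sum_b" ]; then verdict=identical; else verdict=different; fi
echo "$verdict"
rm -r "$workdir"
```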
