Encrypt Zorin: LVM or ZFS

I want to encrypt my Zorin installation. During the installation process, it asks me whether to use LVM or ZFS. I'm not sure about the advantages and disadvantages and wasn't able to get a clear answer on the internet.
Can anyone recommend what to choose?

ZFS is a different type of file system entirely. If you search the forum you'll find plenty of threads from @Mr_Magoo documenting the adventures of installing ZFS on ZorinOS; he can probably give you far more details than I ever could.

However, I personally prefer to err on the safe side of things and rely on tried-and-true methods. LVM has been around for a long time and I've used it without issues. ZFS, on the other hand, was labeled as experimental in the ZorinOS 16 installer, so support for it is comparatively recent.

Some people have had bad experiences with encrypted installs as well, although I haven't had any issues in years. Either way, make sure you have a decent backup strategy in case of hardware failure.


Basically, what zenzen said. :blush:

Also, keep in mind that ZFS does need quite a helping of RAM in order to function appropriately: by default its ARC cache will claim up to half of your RAM (it does give memory back under pressure, and the limit is tunable). Unless you have 32 or 64 GB of RAM available and do not mind ZFS helping itself to a big chunk of it, I would highly recommend staying away from that option.
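
If you do go the ZFS route on a machine with less RAM, that ceiling can be lowered. A minimal sketch, assuming OpenZFS on an Ubuntu-based system like ZorinOS (the 4 GiB figure is just an example):

echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf   # cap the ARC at 4 GiB (value is in bytes)
sudo update-initramfs -u                                                        # rebuild the initramfs so the limit applies from boot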


ZFS isn't a form of encryption, it's a form of data management... a file system and volume manager wrapped up into one, unifying the physical and logical aspects of your disks.

ZFS can be encrypted, but it needn't be (I don't run my ZFS encrypted).
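
If you do want it encrypted, it's done per-dataset. A minimal sketch, assuming OpenZFS 0.8 or newer ('rpool/secure' is just an example name):

sudo zfs create -o encryption=on -o keyformat=passphrase rpool/secure   # prompts you to set a passphrase
sudo zfs get encryption,keystatus rpool/secure                          # confirm the dataset is encrypted and unlocked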

The advantage of ZFS is that you can treat the data as pools, as a liquid. With the system live, I can slosh all the data on one drive over to another drive, completely wipe the first drive, move all the data back onto the first drive and the system will keep running as though nothing happened.
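
In practice, that "sloshing" is a snapshot plus send/receive. A rough sketch, with 'otherpool' standing in for whatever the pool on the second drive is called:

sudo zfs snapshot -r rpool/home@move                              # recursive point-in-time snapshot
sudo zfs send -R rpool/home@move | sudo zfs recv otherpool/home   # stream it to the other pool, with the system live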

It checksums everything it reads (and periodic scrubs verify the rest) to be sure there's no bit-rot, and if you've got mirrored drives, it can repair that damaged data automatically... the drive's firmware remaps failing sectors, and ZFS catches any resulting corruption via checksums and rewrites a good copy from the mirror. I have mirrored drives, so I can have a drive completely fail and the machine will still boot... I can remove that bad drive, drop in an identical drive, boot the machine, and the data will be rebuilt on that new drive. If I had a machine with hot-swap capabilities, I could swap out the bad drive and swap in the new drive without even shutting down the machine.
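
The swap itself is a single command. A sketch, with FAILED-DISK and NEW-DISK as placeholders for the real device IDs:

sudo zpool replace rpool FAILED-DISK NEW-DISK   # resilver the mirror onto the replacement drive
sudo zpool status rpool                         # watch the resilver progress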

Want speed, but don't want to deal with the write-wearing of SSDs? Mirror several spinning-rust drives across several drive interfaces... reads are spread across every drive and interface, so read throughput scales up. Write throughput is still limited to a single drive's speed, though... which you can help, at least for synchronous writes, by mirroring a couple of SLOG (separate intent log) SSDs across several drive interfaces. You'll wear the SSDs out over time, but writes will be fast. I'm searching for a drive that uses RAM sticks (with battery backup)... that'd be the ultimate SLOG drive. Mirroring several of those across several drive interfaces would give insanely fast synchronous writes and no write-wearing.
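
For what it's worth, a mirrored SLOG can be bolted onto an existing pool. A sketch with placeholder device paths... keep in mind the SLOG only absorbs synchronous writes:

sudo zpool add rpool log mirror /dev/disk/by-id/SSD-A /dev/disk/by-id/SSD-B   # attach a mirrored intent log to the pool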

It is, in short, one of the most advanced and data-safe forms of storage available today.

Here's my setup:

sudo zpool status   # Show zpool Status
  pool: bpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:00:03 with 0 errors on Tue Dec 12 22:52:00 2023
config:

	NAME                                      STATE     READ WRITE CKSUM
	bpool                                     ONLINE       0     0     0
	  mirror-0                                ONLINE       0     0     0
	    cd2f0217-f65e-d64b-8e3a-d5622c27318c  ONLINE       0     0     0
	    2b7d5c58-794b-6c4a-8291-0070abe0957d  ONLINE       0     0     0
	    a4fa8f4d-8d99-7e4f-82fb-d4df4c210d9b  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:07:26 with 0 errors on Tue Dec 12 23:26:57 2023
config:

	NAME                                      STATE     READ WRITE CKSUM
	rpool                                     ONLINE       0     0     0
	  mirror-0                                ONLINE       0     0     0
	    19bf24c9-36d2-4e41-a24d-585afea57f6f  ONLINE       0     0     0
	    20159379-a613-b148-a09c-ecad76e64823  ONLINE       0     0     0
	    5cd72a1a-5e56-3642-94c1-1a893ad5210a  ONLINE       0     0     0

errors: No known data errors

'bpool' is the boot pool, and 'rpool' is the root pool. Each rpool drive is 495 GB, so it can chew through that data at up to roughly 1.1 GB/s (495 GB / 446 seconds for the scrub... an upper-bound estimate, since a scrub only reads allocated data).
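
If you want to check a pool yourself, a scrub like the ones in that output can be started by hand:

sudo zpool scrub rpool    # start a scrub in the background
sudo zpool status rpool   # the "scan:" line shows progress and anything repaired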

