ZFS, as a comparatively young filesystem, offers various advantages over regular filesystems like ext3, ext4 and NTFS. We have summarized the most notable ones as follows:
The main benefit of using a ZFS filesystem is guaranteed data integrity
ZFS protects your data by combining volume management and the filesystem in a single layer. This design makes “Copy on Write” (CoW) technology possible: instead of overwriting a block in place, ZFS writes the modified data to a new location on disk and only updates the pointers once the write is completed and verified. With a conventional filesystem, a crash or power loss in the middle of an in-place overwrite can leave that data lost or damaged; with CoW, the old copy stays intact until the new one is safely on disk. To verify data integrity, ZFS stores a checksum for every block and checks it on every read, ensuring that the data read back is identical to the data that was written. Because every block is verified this way, silent corruption (“bit rot”) is detected and, in redundant configurations, repaired.

ZFS not only protects your data with the CoW feature, but also offers RAID protection beyond the standard RAID levels. RAID-Z3 tolerates up to three disk failures in a ZFS pool, whereas regular RAID 6 only tolerates two disk failures per volume. ZFS also offers the ability to set up a multi-disk (n-way) mirror. Usually RAID mirrors are composed of a single disk and one copy; with a multi-disk mirror you can keep multiple copies, which adds a level of data integrity not found in typical RAID setups and is great for read speeds.
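The checksum verification described above can also be triggered on demand with a scrub, which walks every block in the pool and repairs anything that fails its checksum (where redundancy allows). A minimal sketch, assuming a pool named "tank" (a placeholder) and an installed ZFS:

```shell
# Start a full checksum verification pass over every block in the pool.
# "tank" is a placeholder pool name; substitute your own.
zpool scrub tank

# Review the result: the status output reports how much data was
# repaired and whether any unrecoverable errors were found.
zpool status tank
```

Many administrators schedule a scrub periodically (e.g. monthly) so bit rot is caught while the redundant copy is still intact.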
The storage capacity of ZFS is years ahead of what might soon become a problem for regular filesystems. A single ZFS filesystem can grow to 16 EiB = 16 * 2^60 bytes, which is as much as roughly 3,000,000 6 TB HDDs. A configured ZFS pool can easily be resized to accommodate a growing need for more storage: it can be upgraded step by step with larger disks, without complicated procedures and without compromising the filesystem. The hard disks can even be attached to different physical ports, or in a different order in a new computer system, as long as the ZFS version on the target system is the same or higher. You will be able to use your migrated data as soon as the pool import is completed.
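The step-by-step upgrade and the migration to another machine can both be sketched with standard zpool commands. The pool and device names below are placeholders, and each resilver must finish before the next disk is swapped:

```shell
# Grow a mirrored pool by swapping in larger disks one at a time.
zpool replace tank old_small_disk new_large_disk
zpool status tank            # wait until the resilver completes

# Once every disk in the vdev has been replaced, let the pool use the
# extra capacity (or set the "autoexpand" property in advance):
zpool online -e tank new_large_disk

# Moving the disks to another machine: export on the old system,
# then import on the new one, regardless of port order.
zpool export tank
zpool import tank
```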
ZFS also allows writes to be sent to individual physical disks instead of just the RAID volume. Because of this, ZFS can stripe writes across RAID volumes, which speeds up write performance. And if you need to resync a mirror that holds only a little data, you do not have to wait for the empty disk space to be synced, which would otherwise take a good amount of time. ZFS also incorporates caching algorithms to ensure that your most recently used and most frequently used data are kept on the fastest storage media in the system. Spinning disks are known to be slow, and SSD drives come at a very high price compared to regular disks. By using these algorithms in combination with a flash-based ZFS write cache and an L2ARC read cache device, you can speed up your performance by up to 20% at low cost. Other great features of ZFS are the intelligently designed snapshot, clone, and replication functions. ZFS snapshots only record what has changed since the last snapshot, which makes clone and replication tasks less time consuming compared to traditional replication technology.
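Attaching the cache devices mentioned above is a one-line operation each. A minimal sketch, assuming a pool named "tank" and fast flash devices (all names are placeholders):

```shell
# Attach a flash device as L2ARC read cache ("cache") and another as a
# separate intent log for synchronous writes ("log").
zpool add tank cache nvme_device
zpool add tank log ssd_device
```

The log device only accelerates synchronous writes, and the L2ARC only pays off once the working set outgrows RAM, so it is worth checking both assumptions before buying hardware.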
Easy to administer
Creating a new ZFS-Pool is fairly simple. The available storage devices can be listed with “rmformat”, and a pool can be created with the “zpool create -m /mountpoint Contabo1 DEVICE” command. The new filesystem is automatically mounted and immediately accessible; there is no need to format the new ZFS-Pool. If additional storage space is needed, you can easily add another device with the “zpool add Contabo1 DEVICE” command. This corresponds to a classical RAID 0, in which the data is distributed across all available devices.
In general, a setup providing data integrity is much more advisable. With the “zpool create Contabo1 mirror DEVICE DEVICE” command you can easily create a ZFS-Pool with mirrored disks, comparable to a classical RAID 1. You can also use parity-based redundancy to enhance data integrity even further, for example with the “zpool create Contabo1 raidz DEVICE DEVICE DEVICE DEVICE” command. This creates a ZFS-Pool with four disks, of which one is allowed to fail without issues. When using the raidz2 option instead of raidz, two disks can fail at a time.
There is also the option to add Hot-Spares to a ZFS-Pool in order to have a replacement disk ready at all times. If a live disk fails, the Hot-Spare is used automatically to start a rebuild and take the place of the failed disk. A spare is added with the “zpool add Contabo1 spare DEVICE” command.
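The full spare workflow can be sketched as follows; "tank" and the device names are placeholders, and the "autoreplace" property makes the takeover automatic:

```shell
# Enable automatic replacement and attach a standby disk.
zpool set autoreplace=on tank
zpool add tank spare spare_disk

# After a failure, either install a new disk so the spare returns to
# standby once the replacement has resilvered ...
zpool replace tank failed_disk new_disk
# ... or promote the spare to a permanent pool member instead:
zpool detach tank failed_disk
```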
With “zpool list” you can review all existing ZFS-Pools with size, usage and health status.
With “man zfs” and “man zpool” you can review the commands that will give you full control over your ZFS-Pools.
A short overview of the most commonly used commands:
– Creating a RAID-Z pool
zpool create NAME raidz DEVICE DEVICE DEVICE
– Creating a mirrored pool
zpool create NAME mirror DEVICE DEVICE
– Listing of available pools
zpool list
– Show I/O for all pools
zpool iostat 1
– Show status of pool devices
zpool status
– Add disk to a pool
zpool add NAME DEVICE
– Delete a pool
zpool destroy NAME
– Creating and mounting a ZFS filesystem within a pool
zfs create POOL/NAME
– List pool filesystems
zfs list
– Creating and mounting a ZFS filesystem on a non-default mountpoint
zfs create -o mountpoint=/MOUNTPOINT POOL/NAME
– Create a snapshot of a filesystem
zfs snapshot POOL/FILESYSTEM@SNAPSHOT
– Mount a ZFS filesystem (on the mountpoint set in its properties)
zfs mount POOL/FILESYSTEM
– Delete a ZFS filesystem
zfs destroy POOL/NAME
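The commands above can be tried end to end without spare hardware, since ZFS also accepts files as backing devices. A minimal sketch, assuming root privileges and an installed ZFS; all names are placeholders and the file vdevs are for testing only:

```shell
# Create two sparse 1 GiB files to act as stand-in disks.
truncate -s 1G /tmp/disk1 /tmp/disk2
zpool create testpool mirror /tmp/disk1 /tmp/disk2

zfs create testpool/data                     # new filesystem, auto-mounted
zfs snapshot testpool/data@before_change     # point-in-time snapshot
zfs rollback testpool/data@before_change     # revert to that snapshot

zpool destroy testpool                       # clean up the test pool
```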
Which operating systems are compatible with ZFS?
ZFS was initially designed for Solaris, but today it can be used on FreeBSD, in appliances such as FreeNAS and Proxmox, and on most Linux distributions via OpenZFS.