
ZFS commands




date: 2023-11-30 09:26:54


categories: linux


firstPublishDate: 2021-06-27 12:21:41


The commands on this page are my notes from reading the FreeBSD handbook ZFS chapter, OpenZFS - Root on ZFS, and the ZFS Debian wiki.


Read my other posts: ZFS in Debian Bullseye for reasons to use ZFS, and Setting up root on ZFS for installing Debian 11 (bullseye) with ZFS.


Another resource is the Arch Wiki page about ZFS.


Concepts


zpool manages pools

zfs manages datasets


pool - vdevs - disks
  `- datasets
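
As a minimal sketch of that hierarchy (the pool name "tank" and the device names are hypothetical), the two tools divide the work like this:

```shell
zpool create tank mirror /dev/ada1 /dev/ada2   # zpool: build the pool from a mirror vdev of two disks
zfs create tank/home                           # zfs: create a dataset inside the pool
zpool status tank                              # shows pool -> vdev -> disks
zfs list -r tank                               # shows the dataset tree
```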

Usage


fstab is not used: ZFS mounts datasets itself based on each dataset's mountpoint property (unless mountpoint=legacy is set).


Options:


man zfs
man zpool

create a pool


zpool create poolName /dev/da0

create a mirror pool


zpool create poolName mirror /dev/ada1 /dev/ada2

create raidz pool


zpool create poolName raidz /dev/da0 /dev/da1 /dev/da2

create a dataset


zfs create poolName/datasetName
zfs set compression=lz4 snapdir=visible poolName/datasetName
zfs set copies=2 compression=lz4 snapdir=visible poolName/datasetName

create a child dataset and all parent datasets


zfs create -p poolName/dataset1/dataset2/childName

mount/unmount dataset


zfs mount poolName/datasetName
zfs unmount poolName/datasetName

set mount point in tree


(all files have to be closed)


zfs set mountpoint=/path poolName/datasetName

delete dataset


zfs destroy poolName/datasetName

delete snapshot


zfs destroy poolName/datasetName@snapshotName

delete pool


zpool destroy poolName

create a dataset snapshot


zfs snapshot poolName/datasetName@snapshotName

The snapshots are available in `dataset mount point`/.zfs/snapshot/
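
With snapdir=visible, a single file can be restored straight out of that directory, for example (paths hypothetical):

```shell
# Copy a file back out of a snapshot via the hidden .zfs directory
cp /poolName/datasetName/.zfs/snapshot/snapshotName/some/file \
   /poolName/datasetName/some/file
```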


create a recursive pool snapshot


zfs snapshot -r poolName@snapshotName

rollback to a snapshot


zfs rollback poolName/datasetName@snapshotName
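
Note that rollback only goes back to the most recent snapshot; rolling back past newer snapshots requires -r, which destroys the intermediate snapshots. A sketch (snapshot names hypothetical):

```shell
zfs rollback -r poolName/datasetName@olderSnapshot   # also destroys snapshots newer than olderSnapshot
```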

check health


zpool status poolName
# show error details (file paths, errors in snapshots,...):
zpool status poolName -v
zpool status -x
# prints "all pools are healthy" when no pool has problems

when a disk is dead in a raidz pool


zpool offline storage da1
zpool replace storage da1
zpool status storage

data verification


zpool scrub poolName

adding and removing devices


zpool attach >> adds a disk to an existing vdev
zpool add    >> adds a new vdev to a pool
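
For example (device names hypothetical): attach turns a single-disk vdev into a mirror, while add grows the pool with a new top-level vdev:

```shell
zpool attach poolName /dev/ada1 /dev/ada2   # mirror /dev/ada1 with /dev/ada2 in the same vdev
zpool add poolName /dev/ada3                # add /dev/ada3 as a new vdev (more capacity, no redundancy)
```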

replace a functioning device


zpool replace poolName /dev/ada1p3 /dev/ada2p3

history


zpool history
zpool history poolName

show more info, like snapshot events:


zpool history -i

long history:


zpool history -l

performance monitoring


zpool iostat
zpool iostat -v

list datasets


zfs list

list snapshots


zfs list -t snapshot

list all


zfs list -t all

list a dataset and snapshots for this dataset


zfs list -rt all poolName/datasetName

list snapshots for a dataset


zfs list -rt snapshot poolName/datasetName

get compression ratio for a dataset


zfs get used,compressratio,compression,logicalused poolName/datasetName

get all options for a dataset


zfs get all poolName/datasetName

rename dataset


zfs rename poolName/datasetName poolName/newDatasetName

rename snapshot


zfs rename poolName/datasetName@snapshotName poolName/datasetName@newSnapshotName

diff snapshot


diff snapshot to current


zfs diff poolName/datasetName@snapshotName

diff 2 snapshots


zfs diff poolName/datasetName@snapshotName poolName/datasetName@snapshotName2

set visible snapshot directory


get state:


zfs get snapdir poolName/datasetName
zfs set snapdir=visible poolName/datasetName
zfs set snapdir=hidden poolName/datasetName

clone snapshot to dataset


zfs clone poolName/datasetName@snapshotName poolName/newDatasetName

show original snapshot name:


zfs get origin poolName/newDatasetName

remove link with snapshot


zfs promote poolName/newDatasetName

send dataset with ssh


Setup:
+ host:
zfs allow -u user send,snapshot zroot
+ remote:
sysctl vfs.usermount=1
echo vfs.usermount=1 >> /etc/sysctl.conf
zfs create zroot/backup
zfs allow -u user create,mount,receive zroot/backup
# or zfs allow -u receiver compression,mountpoint,mount,create,receive rxpool
chown user /zroot/backup
zfs create zroot/backup/compressed
zfs set compression=lz4 snapdir=visible zroot/backup/compressed

+ command:
zfs send -vp zroot/compressed@e | ssh user@172.16.43.235 zfs recv -Fdv zroot/backup
-v verbose
-p send dataset properties

receive:
-F Force a rollback of the file system to the most recent snapshot
-d Use the full sent snapshot path without the pool name to determine the name of the new snapshot
-v verbose

send incremental backups


zfs send -vi zroot/compressed@oldSnapshot zroot/compressed@newSnapshot | ssh user@172.16.43.235 zfs recv -dvu zroot/backup
# zfs recv -Fdv also works

send with ssh example


zfs snapshot zroot/compressed@initial
# on remote:
zfs create zroot/tank
zfs set compression=lz4 snapdir=visible zroot/tank
zfs allow -u user compression,mountpoint,mount,create,receive zroot/tank
chown user /zroot/tank
# on host:
zfs send -v zroot/compressed@initial | ssh user@172.16.43.235 zfs recv -Fd zroot/tank
zfs snapshot zroot/compressed@evening
zfs send -vi initial zroot/compressed@evening | ssh user@172.16.43.235 zfs recv -Fd zroot/tank

create a pool and a dataset in an empty disk partition


zpool create \
    -o cachefile=/etc/zfs/zpool.cache \
    -o ashift=12 \
    -O acltype=posixacl -O snapdir=visible -O compression=lz4 \
    -O dnodesize=auto -O normalization=formD -O relatime=on \
    -O xattr=sa -O mountpoint=/home/data \
    sandisk /dev/disk/by-id/DISK-part1

Importing a disk


I moved a ZFS disk from one computer to a second one and I want to mount its datasets. To do that, run `zpool import` with no arguments to list the available pools with their ids, then run `zpool import $poolId` and mount the datasets with the `zfs` command.


zpool import
   pool: thedisk
     id: 15071621791720838351
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        thedisk     ONLINE
          ada0s1d   ONLINE
zpool import 15071621791720838351
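
After importing, datasets with a mountpoint set are normally mounted automatically; if not, they can be mounted by hand:

```shell
zfs list -r thedisk   # see the datasets in the imported pool
zfs mount -a          # mount every dataset that is not mounted yet
```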

hashtags: #zfs #linux

