Filesystems


This page covers the following subjects for the 5/15/19 JaxLUG presentation, and may be expanded further in the future:

Linux Software RAID

Linux software RAID allows you to build a redundant array of disks without expensive hardware.
A good guide: https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm

RAID Levels
RAID 0 - Stripe only, no redundancy, minimum 2 disks
RAID 1 - Mirror only, minimum 2 disks
RAID 5 - Stripe with N+1 failure resistance, minimum 3 disks
RAID 6 - Stripe with N+2 failure resistance, minimum 4 disks
Combo RAIDs
RAID 10 - Stripe of mirrors, minimum 4 disks
RAID 61 - Mirror of RAID 6 sets (N+2 failure resistance per set)
Many more

Building and maintaining RAID with mdadm:

## On Debian, the package to install is mdadm
apt install mdadm

## First, set up partitions on your RAID member devices; the example below shows sdb partitioned with partition type 'fd' ('Linux raid autodetect'):
root@jaxlug-deb:~# fdisk -l /dev/sdb
Disk /dev/sdb: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc47416ae

Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1        2048 16777215 16775168   8G fd Linux raid autodetect

## in this example we use mdadm to create a RAID 5 from /dev/sdb1, /dev/sdc1, and /dev/sdd1:
root@jaxlug-deb:~# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 8383488K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

## If you check the status shortly after creation you will see it building; there are options to assume the array is clean and skip the initial sync, but that is generally unwise
root@jaxlug-deb:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed May 15 15:17:37 2019
     Raid Level : raid5
     Array Size : 16766976 (15.99 GiB 17.17 GB)
  Used Dev Size : 8383488 (8.00 GiB 8.58 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed May 15 15:17:48 2019
          State : clean, degraded, recovering 
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 28% complete

           Name : jaxlug-deb:0  (local to host jaxlug-deb)
           UUID : 09b8891f:5c2d61b8:799db1d3:481d71b6
         Events : 5

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      spare rebuilding   /dev/sdd1

## /proc/mdstat gives another, often more useful, view of the md devices on the machine:
root@jaxlug-deb:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      16766976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [=========>...........]  recovery = 48.3% (4056540/8383488) finish=0.3min speed=202827K/sec
      
unused devices: <none>

## After a rebuild is complete and the device is clean:
root@jaxlug-deb:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed May 15 15:17:37 2019
     Raid Level : raid5
     Array Size : 16766976 (15.99 GiB 17.17 GB)
  Used Dev Size : 8383488 (8.00 GiB 8.58 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed May 15 15:18:20 2019
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : jaxlug-deb:0  (local to host jaxlug-deb)
           UUID : 09b8891f:5c2d61b8:799db1d3:481d71b6
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

## Stopping a RAID device:
root@jaxlug-deb:~# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root@jaxlug-deb:~# mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory

## re-assembling the RAID device (you can also specify a lot more information if the scan can't find it):
root@jaxlug-deb:~# mdadm --assemble --scan
mdadm: /dev/md/0 has been started with 3 drives.
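
If a member disk fails, the general flow is to mark it failed, remove it, add a replacement, and then persist the array definition so it assembles at boot. A minimal sketch; /dev/sdc1 as the failed member and /dev/sde1 as its replacement are hypothetical device names:

## fail and remove the bad member, then add the new one (the array rebuilds onto it)
mdadm --fail /dev/md0 /dev/sdc1
mdadm --remove /dev/md0 /dev/sdc1
mdadm --add /dev/md0 /dev/sde1
## record the array so it assembles at boot (Debian keeps this in /etc/mdadm/mdadm.conf)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u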

LVM

LVM (the Logical Volume Manager) allows you to slice a single partition, or a set of partitions, into logical volumes without the difficulty of creating vast numbers of potentially conflicting real partitions, and makes it easy to grow, shrink, and move those volumes.

LVM can be set up in a large variety of ways, supporting encryption, RAID-like behaviour, and many other things; this doc, however, only covers the more basic functionality.

Debian LVM Wiki: https://wiki.debian.org/LVM

LVM is built on the following main concepts:
PV - Physical Volume: the partitions (or whole devices) that make up the physical storage of LVM
VG - Volume Group: a virtual pool of storage built on top of PVs; it can use a single PV or multiple
LV - Logical Volume: a slice of a volume group, presented to the operating system as an I/O device like a disk partition

Building and maintaining LVM

## On Debian, install the lvm2 package and start the service
apt install lvm2
## Also, since Debian uses the systemd dumpsterfire, you need to deviate from the Debian wiki to start it... and fix systemd's setup of lvm2:
rm /lib/systemd/system/lvm2.service ## this unit points at /dev/null and will permanently block systemd from starting lvm2
systemctl daemon-reload; systemctl start lvm2; systemctl status lvm2

## Now we can create our material:
## Creating and viewing a PV that uses our mdadm raid:
root@jaxlug-deb:~# pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created.
root@jaxlug-deb:~# pvdisplay /dev/md0
  "/dev/md0" is a new physical volume of "15.99 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/md0
  VG Name               
  PV Size               15.99 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               PgVozG-jJxP-zmeu-iy1V-Sa0Y-xZ09-u48efd

## Creating and viewing a VG that uses our PV that we just created
root@jaxlug-deb:~# vgcreate VG_test /dev/md0
  Volume group "VG_test" successfully created
root@jaxlug-deb:~# vgdisplay VG_test
  --- Volume group ---
  VG Name               VG_test
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               15.99 GiB
  PE Size               4.00 MiB
  Total PE              4093
  Alloc PE / Size       0 / 0   
  Free  PE / Size       4093 / 15.99 GiB
  VG UUID               XLvA8k-fLpR-PxH9-oWSh-xWc0-1Os0-x4Qayv
   

## Now you can see that PV /dev/md0 belongs to the VG_test volume group and is fully allocated to it
root@jaxlug-deb:~# pvdisplay /dev/md0
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               VG_test
  PV Size               15.99 GiB / not usable 2.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              4093
  Free PE               4093
  Allocated PE          0
  PV UUID               PgVozG-jJxP-zmeu-iy1V-Sa0Y-xZ09-u48efd

## Since we now have a VG that has space in it, we can slice it up into a few LVs
root@jaxlug-deb:~# lvcreate -n LV_test01 -L 100M VG_test
  Logical volume "LV_test01" created.
root@jaxlug-deb:~# lvcreate -n LV_test02 -L 100M VG_test
  Logical volume "LV_test02" created.

## Checking the status of an LV; we should specify the VG as well
root@jaxlug-deb:~# lvdisplay VG_test/LV_test01
  --- Logical volume ---
  LV Path                /dev/VG_test/LV_test01
  LV Name                LV_test01
  VG Name                VG_test
  LV UUID                tsVBy1-f7ZR-BXym-H7Fu-qHBf-8ck0-D91ud1
  LV Write Access        read/write
  LV Creation host, time jaxlug-deb, 2019-05-15 15:50:19 -0400
  LV Status              available
  # open                 0
  LV Size                100.00 MiB
  Current LE             25
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:0
   
root@jaxlug-deb:~# lvdisplay VG_test/LV_test02
  --- Logical volume ---
  LV Path                /dev/VG_test/LV_test02
  LV Name                LV_test02
  VG Name                VG_test
  LV UUID                nFSm0V-UBmM-V9Xp-6dUD-IayT-IVoP-rn1JRp
  LV Write Access        read/write
  LV Creation host, time jaxlug-deb, 2019-05-15 15:50:32 -0400
  LV Status              available
  # open                 0
  LV Size                100.00 MiB
  Current LE             25
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:1

## and our VG now shows it's being used up a little bit:
root@jaxlug-deb:~# vgdisplay VG_test
  --- Volume group ---
  VG Name               VG_test
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               15.99 GiB
  PE Size               4.00 MiB
  Total PE              4093
  Alloc PE / Size       50 / 200.00 MiB
  Free  PE / Size       4043 / 15.79 GiB
  VG UUID               XLvA8k-fLpR-PxH9-oWSh-xWc0-1Os0-x4Qayv

## you can now create filesystems on the logical volumes we have created
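
## A quick sketch of growing an LV later on; the resize2fs step assumes the LV already
## carries an ext4 filesystem (one is created on LV_test02 further down this page):
lvextend -L +100M VG_test/LV_test02
resize2fs /dev/VG_test/LV_test02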

LUKS

Linux Filesystem encryption using LUKS
Doc on installing on various flavors: https://linuxconfig.org/basic-guide-to-encrypting-linux-partitions-with-luks

Encryption in this fashion encrypts data AT REST. When your system has the volume mounted, it is decrypted and viewable to the system and anyone who has access.

## On Debian you need to install the cryptsetup package:
apt install cryptsetup

## First, as a safety measure, we want to fill the volume with random junk before encrypting it; this keeps someone from seeing where there is and isn't data, making it less obvious where the data sits on the disk. Over time this becomes less of a problem, but on a newly created volume it is prudent. This will likely take a lot of time; ours is fast because it's just a 100M volume. The bs=4M makes the blocks larger so writes are easier and faster:
root@jaxlug-deb:~# dd if=/dev/urandom of=/dev/mapper/VG_test-LV_test01 bs=4M
dd: error writing '/dev/mapper/VG_test-LV_test01': No space left on device
26+0 records in
25+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 1.44965 s, 72.3 MB/s

## Let's say we want to encrypt one of our new LVM volumes; here's how you would do such a thing:
root@jaxlug-deb:~# cryptsetup luksFormat /dev/mapper/VG_test-LV_test01 

WARNING!
========
This will overwrite data on /dev/mapper/VG_test-LV_test01 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase: <our password was testtest>
Verify passphrase: 

## now we need to 'open' the device; we are naming this new device test01enc
root@jaxlug-deb:~# cryptsetup luksOpen /dev/mapper/VG_test-LV_test01 test01enc
Enter passphrase for /dev/mapper/VG_test-LV_test01: <our testtest pass>

## now you can see there is a new device named test01enc; it acts like a normal block device, so you can put a filesystem on it, then close it to 'unmount' the device.
root@jaxlug-deb:~# ls /dev/mapper
control  test01enc  VG_test-LV_test01  VG_test-LV_test02
root@jaxlug-deb:~# mkfs.ext4 /dev/mapper/test01enc 
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 100352 1k blocks and 25168 inodes
Filesystem UUID: 7ffdf39f-667a-4f80-976e-fc7c6060ce33
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done 

root@jaxlug-deb:~# mount /dev/mapper/test01enc /mnt
root@jaxlug-deb:~# df -h /mnt
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/test01enc   91M  1.6M   83M   2% /mnt

## simply unmount and 'close' the device when you are done, and the data is once again encrypted at rest.
root@jaxlug-deb:~# umount /mnt
root@jaxlug-deb:~# cryptsetup luksClose test01enc
root@jaxlug-deb:~# ls /dev/mapper
control  VG_test-LV_test01  VG_test-LV_test02
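
## A couple of extra LUKS bits worth knowing (a sketch): you can add a second passphrase
## to another key slot and inspect the header to see which slots are in use:
cryptsetup luksAddKey /dev/mapper/VG_test-LV_test01
cryptsetup luksDump /dev/mapper/VG_test-LV_test01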

Filesystems

Filesystems reside on physical disks inside of partitions (or other devices as described above) and allow the OS to store data.
There are MANY filesystems to choose from, and ways of configuring each. Here is a good document on the differences: https://en.wikipedia.org/wiki/Comparison_of_file_systems

There are two main camps of filesystems in Linux, OSS and non-OSS, and filesystems can be loaded into the kernel as modules, compiled in directly, or via FUSE.
OSS - Open source (or free-sourced) filesystems. These generally perform very well in Linux, and there is a lot of variety.
non-OSS - Many of these are reverse-engineered and may have compatibility issues, performance issues, or need to be loaded via FUSE (userspace) or a separately built kernel module, thus tainting the kernel with non-GPL code. The communities around these are passionate and hard-working, creating effective drivers for things such as NTFS that let Linux run these filesystems very well despite the shortcomings.
Licence variation: For a filesystem to perform at its best, it has to be loaded into the kernel; to be shipped with the kernel it MUST be GPL, as the code will be inside the kernel itself and distributed as such. Alternatively you can use FUSE to mount the filesystem from userspace, or build a kernel module and load it via the module system, at the cost of potentially tainting the kernel's GPL status.


Loading styles
direct loading (aka compiled in): the driver is built directly into the kernel image; this requires using the kernel's own driver, hence the driver must be GPL. Direct loading is required for initial OS booting (the /boot material)
module loading: the driver is loaded into the kernel after initial boot. Drivers compiled from the kernel tree will be GPL compliant; drivers compiled outside it (built against kernel sources/headers with additional code) may or may not be, and can potentially 'taint' the kernel, affecting its distribution. E.g. if you compile ZFS into a module, then distribute the version of Linux you made using it, you would be beholden to both the GPL and the CDDL, as you are using both code bases. Loading things as modules has significant advantages, however: modules can be loaded and unloaded as long as they are unused, so if you have an updated ZFS driver, say, you could stop your ZFS volumes, unload the module, update it, and reload the new one, all without restarting (see the sketch after this list).
FUSE (userspace): FUSE allows filesystem drivers to run in userspace without directly tainting the kernel. It is useful if you need to mount a filesystem without compiling a driver for that specific kernel; however, this method is far less efficient than the other two in terms of performance.
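
A quick sketch of the module route, assuming your kernel ships btrfs as a module (module names depend on your kernel build):

## see which filesystems the running kernel already supports
cat /proc/filesystems
## load the btrfs module and confirm it registered
modprobe btrfs
lsmod | grep btrfs
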
Main Filesystems that you will probably encounter
ext series (2/3/4) - https://en.wikipedia.org/wiki/Extended_file_system
ext2 - ext filesystem, no journal
ext3 - ext filesystem, with journal
ext4 - next gen ext filesystem
xfs - very fast filesystem by Silicon Graphics back in the day - https://en.wikipedia.org/wiki/XFS
btrfs - strives to be a GPL replacement for ZFS - https://en.wikipedia.org/wiki/Btrfs
reiserfs - new-age filesystem whose development has mostly stalled - https://en.wikipedia.org/wiki/ReiserFS
zfs - zettabyte filesystem (CDDL License) - https://en.wikipedia.org/wiki/ZFS, OpenZFS - https://en.wikipedia.org/wiki/OpenZFS, good material on openzfs vs zfs: https://www.ixsystems.com/blog/zfs-vs-openzfs/

These filesystems all have their pluses and minuses, but BTRFS and ZFS have some of the coolest features (most of these features are off the top of my head about ZFS, but BTRFS strives to be an OSS replacement so has many of the same features):

  • Copy on Write (COW) - allows you to snapshot and keep or ship data
  • Data validation - allows you to ensure the data written is good, on a file-to-file basis (dependent on things like reliable memory; you should use ECC memory with these for that reason)
  • Ability to do pooling like lvm, and raid like mdadm (do not use this on btrfs)
  • Compression - you can configure pools, or filesystems, to use a variety of compressions
  • Deduplication - you can also configure it to use deduplication, however you need a TON of memory for this.
  • Multi-device inclusion - you are able to add devices such as SSDs to speed up synchronous writes (a separate log device for the ZIL, the ZFS Intent Log) and common reads (a cache device, the L2ARC). ZFS will also use system memory as a read cache (the ARC), further increasing read speed for common items (another reason to have good memory)

However, they have pretty big pitfalls (most of these are from ZFS, but likely apply to both):

  • When the filesystem is full, it's full: you can't delete anything, because deleting is itself a copy-on-write operation that needs free space; you need to add space to the volume to be able to delete things
  • As the filesystem gets increasingly full (85%+), the allocator has to look harder and harder for free space, using more aggressive algorithms to do so, and performance becomes very poor
  • Fragmentation can sometimes be a problem on systems that deal with tons of small files being moved in and out.
  • The ZFS and OpenZFS relationship is strange and deep; there is a lot of material on what's going on, but suffice to say ZFS is CDDL-licensed and not compatible with the Linux kernel's GPL. OpenZFS is a fork of the last public release of ZFS and is maintained in conjunction with the Illumos project (previously OpenSolaris). A lot of code in OpenZFS has been replaced at this point, and ZFS has continued development inside Oracle; ZFS and OpenZFS are not necessarily compatible technically or philosophically.
  • BTRFS is still not mature in many areas, and some features, such as its built-in RAID (particularly RAID 5/6), are exceedingly unstable and should be avoided.

Putting a filesystem on a device is usually pretty straightforward, but can become complex if you are trying to eke out every bit of performance on a system.

## example of building a simple ext4 filesystem on the LVM logical volume we previously created:
root@jaxlug-deb:~# mkfs.ext4 /dev/mapper/VG_test-LV_test02
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 102400 1k blocks and 25688 inodes
Filesystem UUID: 17d3ff75-324d-4cc5-b36b-c52f9b0be37b
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done 

root@jaxlug-deb:~# mount /dev/mapper/VG_test-LV_test02 /mnt
root@jaxlug-deb:~# df -h /mnt
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/VG_test-LV_test02   93M  1.6M   85M   2% /mnt
root@jaxlug-deb:~# mount | grep /mnt
/dev/mapper/VG_test-LV_test02 on /mnt type ext4 (rw,relatime,stripe=1024,data=ordered)
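
## to make the mount persistent across reboots you would add an fstab entry; a sketch
## using the UUID from the mkfs output above (mount point and options are up to you):
echo 'UUID=17d3ff75-324d-4cc5-b36b-c52f9b0be37b /mnt ext4 defaults 0 2' >> /etc/fstab
## test the entry without rebooting
umount /mnt; mount -a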

ZFS

This is my favorite filesystem; it does all kinds of things (see the previous section on ZFS/BTRFS features) and allows you to host enterprise-esque material on pretty much any hardware. That said, there are numerous and deep pitfalls.

A few pitfalls to watch out for:

OpenZFS vs ZFS
OpenZFS is an open source branch of ZFS started from the last public version of ZFS, marked at pool version 5000 perpetually; its codebase has diverged from the still internally developed ZFS at Oracle. It performs fantastically and I enjoy using it quite a lot.
ZFS is the standard bearer, and stopped being publicly available some time ago when Oracle bought Sun Microsystems. It is still being developed at Oracle and used for appliances and server filesystems to great effect.
Hardware
This filesystem was built for server-grade hardware, and things such as not having registered ECC memory on your system can be potentially devastating, with silent corruption a real possibility. That said, I've run it for years without issue, but I've been lucky and should switch over to server-grade materials on my fileserver.
Deduplication
Don't, just don't. Unless you have a *specific* need such as hundreds of VMs that use the same starter images, or lots of duplicated material, *and* you have boat-tonnes of memory, don't use this feature.
Partitions
Best practice is to deploy ZFS directly to whole disks without setting up partitions. This may lead unknowing users to think the disks are unused and destroy things.

There are two main commands you will be using:

zpool - managing the pool (like LVM VG and PV)
zfs - managing filesystems (like LVM LV)

RAID - ZFS allows you to create RAID sets like mdadm; however, some are named differently. Here's a quick reference:

(none) - like stripe/jbod
mirror - like RAID 1
raidz - like RAID 5
raidz2 - like RAID 6
raidz3 - like RAID 5/6 but with N+3
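
The vdev type is just a keyword given to zpool create; for comparison, a two-disk mirror pool would look something like this (the pool name 'tank' and the device names are hypothetical):

## two-disk mirror, roughly equivalent to RAID 1
zpool create tank mirror /dev/sdx /dev/sdy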

Building and maintaining zfs volumes and filesystems:

## Creating a pool with 3 disks in a raidz (ZFS RAID 5)
root@jaxlug-deb:~# zpool create zpool raidz -f /dev/sde /dev/sdf /dev/sdg
root@jaxlug-deb:~# zpool status
  pool: zpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	zpool       ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    sde     ONLINE       0     0     0
	    sdf     ONLINE       0     0     0
	    sdg     ONLINE       0     0     0

errors: No known data errors
root@jaxlug-deb:~# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
zpool  67.9K  15.4G  24.0K  /zpool

## creating and manipulating zfs filesystems is pretty easy:
root@jaxlug-deb:~# zfs create zpool/test01
root@jaxlug-deb:~# zfs create zpool/test02
root@jaxlug-deb:~# zfs set compression=on zpool/test01
root@jaxlug-deb:~# zfs set quota=40m zpool/test01
root@jaxlug-deb:~# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
zpool          126K  15.4G  24.0K  /zpool
zpool/test01  24.0K  40.0M  24.0K  /zpool/test01
zpool/test02  24.0K  15.4G  24.0K  /zpool/test02
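
## snapshots (one of the COW features mentioned earlier) are cheap to take; a small
## sketch, where the snapshot name 'before-changes' is arbitrary:
zfs snapshot zpool/test01@before-changes
zfs list -t snapshot
## roll the filesystem back to that point in time if something goes wrong
zfs rollback zpool/test01@before-changes
## snapshots can also be shipped to another pool or host with zfs send / zfs receive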

## and there are a lot of options to play with: 
root@jaxlug-deb:~# zfs get all zpool/test01
NAME          PROPERTY              VALUE                  SOURCE
zpool/test01  type                  filesystem             -
zpool/test01  creation              Wed May 15 17:10 2019  -
zpool/test01  used                  24.0K                  -
zpool/test01  available             40.0M                  -
zpool/test01  referenced            24.0K                  -
zpool/test01  compressratio         1.00x                  -
zpool/test01  mounted               yes                    -
zpool/test01  quota                 40M                    local
zpool/test01  reservation           none                   default
zpool/test01  recordsize            128K                   default
zpool/test01  mountpoint            /zpool/test01          default
zpool/test01  sharenfs              off                    default
zpool/test01  checksum              on                     default
zpool/test01  compression           on                     local
zpool/test01  atime                 on                     default
zpool/test01  devices               on                     default
zpool/test01  exec                  on                     default
zpool/test01  setuid                on                     default
zpool/test01  readonly              off                    default
zpool/test01  zoned                 off                    default
zpool/test01  snapdir               hidden                 default
zpool/test01  aclinherit            restricted             default
zpool/test01  canmount              on                     default
zpool/test01  xattr                 on                     default
zpool/test01  copies                1                      default
zpool/test01  version               5                      -
zpool/test01  utf8only              off                    -
zpool/test01  normalization         none                   -
zpool/test01  casesensitivity       sensitive              -
zpool/test01  vscan                 off                    default
zpool/test01  nbmand                off                    default
zpool/test01  sharesmb              off                    default
zpool/test01  refquota              none                   default
zpool/test01  refreservation        none                   default
zpool/test01  primarycache          all                    default
zpool/test01  secondarycache        all                    default
zpool/test01  usedbysnapshots       0                      -
zpool/test01  usedbydataset         24.0K                  -
zpool/test01  usedbychildren        0                      -
zpool/test01  usedbyrefreservation  0                      -
zpool/test01  logbias               latency                default
zpool/test01  dedup                 off                    default
zpool/test01  mlslabel              none                   default
zpool/test01  sync                  standard               default
zpool/test01  refcompressratio      1.00x                  -
zpool/test01  written               24.0K                  -
zpool/test01  logicalused           9.50K                  -
zpool/test01  logicalreferenced     9.50K                  -
zpool/test01  filesystem_limit      none                   default
zpool/test01  snapshot_limit        none                   default
zpool/test01  filesystem_count      none                   default
zpool/test01  snapshot_count        none                   default
zpool/test01  snapdev               hidden                 default
zpool/test01  acltype               off                    default
zpool/test01  context               none                   default
zpool/test01  fscontext             none                   default
zpool/test01  defcontext            none                   default
zpool/test01  rootcontext           none                   default
zpool/test01  relatime              off                    default
zpool/test01  redundant_metadata    all                    default
zpool/test01  overlay               off                    default
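
## Routine maintenance is mostly scrubbing: a scrub walks the pool and verifies every block
## against its checksum, repairing from redundancy where it can. How often to run it is a
## matter of taste (a monthly cron job is a common, though here assumed, choice):
zpool scrub zpool
zpool status zpool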