
ZFS Deduplication with NTFS

November 24th, 2009

ZFS deduplication was recently integrated into build 128 of OpenSolaris, and while others have tested it out with normal file operations, I was curious to see how effective it would be with zvol-backed NTFS volumes.  Due to the structure of NTFS I suspected that it would work well, and the results confirmed that.

NTFS allocates space in fixed-size units called clusters.  The default cluster size for NTFS volumes under 16 TB is 4K, but it can be set explicitly when the volume is created.  For this test I stuck with the default 4K cluster size and matched the zvol block size to it, to maximize the effectiveness of the deduplication.  Since ZFS dedup operates on whole blocks, a zvol block size larger than the cluster size would prevent identical clusters at different offsets from deduplicating.  In practice, matching the sizes probably had a negligible effect on this particular test, but for normal workloads the difference could be considerable.
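For reference, the cluster size can be checked and set from the Windows side.  These commands are not part of the original test, but reproduce the setup (note that format is destructive):

```shell
:: Report NTFS parameters for an existing volume; look for "Bytes Per Cluster"
fsutil fsinfo ntfsinfo D:

:: Create a new NTFS volume with an explicit 4K cluster size
format E: /FS:NTFS /A:4096 /Q
```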

The OpenSolaris system was prepared by installing OpenSolaris build 127, installing the COMSTAR iSCSI Target, and then BFU'ing the system to build 128.

The zpool was created with both dedup and compression enabled:

# zpool create tank c4t1d0
# zfs set dedup=on tank
# zfs set compression=on tank
# zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  19.9G   148K  19.9G     0%  1.00x  ONLINE  -

Next, the zvol block devices were created.  Note that the volblocksize option was explicitly set to 4K:

# zfs create tank/zvols
# zfs create -V 4G -o volblocksize=4K tank/zvols/vol1
# zfs create -V 4G -o volblocksize=4K tank/zvols/vol2
# zfs list -r tank
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank             8.00G  11.6G    23K  /tank
tank/zvols       8.00G  11.6G    21K  /tank/zvols
tank/zvols/vol1     4G  15.6G    20K  -
tank/zvols/vol2     4G  15.6G    20K  -

After the zvols were created, they were shared with the COMSTAR iSCSI Target and then initialized and formatted as NTFS from Windows.  With only 4 MB of data on the volumes, the dedup ratio shot way up.

# zpool list tank
NAME   SIZE  ALLOC   FREE    CAP    DEDUP  HEALTH  ALTROOT
tank  19.9G  3.88M  19.9G     0%  121.97x  ONLINE  -
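The COMSTAR sharing step itself isn't shown above; assuming the STMF framework and the iSCSI target service are installed, exporting a zvol might look roughly like this sketch (the GUID shown is a placeholder for whatever sbdadm actually prints):

```shell
# Enable the STMF framework and the COMSTAR iSCSI target service
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default

# Register the zvol as a SCSI logical unit and note the GUID it reports
sbdadm create-lu /dev/zvol/rdsk/tank/zvols/vol1

# Expose the logical unit to all initiators (substitute the real GUID)
stmfadm add-view 600144f000000000000000000000cafe

# Create an iSCSI target for initiators to log in to
itadm create-target
```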

The NTFS volumes were configured in Windows as disks D: and E:.  I started off by copying a 10 MB file and then a 134 MB file to D:.  The 10 MB file was used to offset the larger file from the start of the disk so that it wouldn’t be in the same location on both volumes.  As expected, the dedup ratio dropped down towards 1x as there was only a single copy of the files:

# zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  19.9G   133M  19.7G     0%  1.39x  ONLINE  -

The 134 MB file was then copied to E:, and immediately the dedup ratio jumped up.  So far, so good:  dedup works across multiple NTFS volumes:

# zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  19.9G   173M  19.7G     0%  2.26x  ONLINE  -

A second copy of the 134 MB file was copied to E: to test dedup between files on the same NTFS volume.  As expected, the dedup ratio rose to around 3x, reflecting the three copies of the file:

# zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  19.9G   184M  19.7G     0%  3.19x  ONLINE  -

Though simple, these tests showed that ZFS deduplication performs well, conserving disk space both within a single NTFS volume and across multiple volumes in the same ZFS pool.  The dedup ratios were even a bit higher than expected, which suggests that a fair amount of the NTFS metadata, at least initially, was also deduplicated.

Windows Backups to ZFS

November 18th, 2009

One of the methods I use for backing up Windows applications is to mirror their files to a ZFS file system with robocopy and then snapshot the file system to preserve its state.  I use this primarily for nightly backups and during application maintenance, since the method typically requires that the service be stopped for the duration of the copy.

There are a number of ZFS features that make it great for backups, among them snapshots, compression, and efficient incremental sending of file systems and block storage.  Dedup, arriving in build 128, will add further benefits as well.  All of these help conserve disk space and speed up backup and restore operations.
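As an illustration of the incremental sending mentioned above, replicating a backup file system to a second machine might look like this (the host and snapshot names here are made up for the example):

```shell
# Snapshot tonight's state
zfs snapshot tank/backups@tonight

# Send only the blocks changed since the previous snapshot,
# piping the stream to a receiving pool over ssh
zfs send -i tank/backups@lastnight tank/backups@tonight | \
    ssh backuphost zfs receive tank/backups
```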

This assumes a recent, working OpenSolaris system with the CIFS service already configured.  The latest version of OpenSolaris at this time is build 127.  For documentation on how to set up the CIFS service, see Getting Started With the Solaris CIFS Service.

To start off, create a parent file system for the backups.  Its purpose is to let properties be set once and then be inherited by the descendant file systems created for the backup sets.  Enable mixed case sensitivity and non-blocking mandatory locks to improve compatibility between POSIX and Windows file semantics.  Set the sharesmb property to share the file system via CIFS and to shorten the names of the shares: the name specified below becomes the backups_ prefix for the descendant file systems' share names, whereas without it the prefix would be the full file system path, in this case tank_backups_.  Finally, delegate the snapshot and mount permissions to the backup user (backupuser below stands in for whatever account performs the backups) so that snapshots can be created on the descendant file systems by simply creating a directory from the script.

# zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=name=backups tank/backups
# zfs allow -d backupuser snapshot,mount tank/backups

With the initial setup completed, begin creating the backup sets.  Create a descendant file system under tank/backups for each backup set and give the backup user (again, backupuser is a placeholder) ownership of it.  This is a simple example; it might be worthwhile to give other users read access as well, or to add more advanced ACLs to the file systems.

# zfs create tank/backups/someservice
# chown backupuser /tank/backups/someservice
# chmod 700 /tank/backups/someservice

Normally, I enable compression for the entire pool and then disable it for file systems that won't see any benefit, such as those holding only multimedia files.  If the backup file systems don't inherit compression, it might be worth enabling it on them directly.  Those who can spare some performance for additional disk space might try gzip compression instead of the default lzjb.

# zfs set compression=on tank/backups
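If gzip is chosen, the level can also be tuned: gzip alone is equivalent to gzip-6, and gzip-1 through gzip-9 trade speed for compression ratio.  For example (a suggestion, not part of the original setup):

```shell
# Favor disk space over CPU on the backup tree
zfs set compression=gzip-9 tank/backups

# Later, check how well the data is actually compressing
zfs get compressratio tank/backups
```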

Finally, create a customized Windows batch file and set it to run automatically with the Windows Task Scheduler.

@echo off
set "src=D:\Data\SomeService"
set "dst=\\opensolaris\backups_someservice"
set "service=SomeService"

rem Build a YYYYMMDD-HHMMSS timestamp, then replace leading spaces with zeros
set "timestamp=%DATE:~10,4%%DATE:~4,2%%DATE:~7,2%-%TIME:~0,2%%TIME:~3,2%%TIME:~6,2%"
set "timestamp=%timestamp: =0%"

net stop "%service%"
robocopy "%src%" "%dst%" /MIR
net start "%service%"
mkdir "%dst%\.zfs\snapshot\%timestamp%"

The script is straightforward; the only tricky lines are the two that build the timestamp.  Together they produce a timestamp of the form YYYYMMDD-HHMMSS, with the second line padding any single-digit values by replacing leading spaces with zeros.
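Incidentally, the same timestamp format is a one-liner on the OpenSolaris side with date(1), which could be handy if snapshots are ever scheduled from the server instead (a suggestion, not part of the original scheme):

```shell
# YYYYMMDD-HHMMSS, matching the format the batch file builds
timestamp=$(date +%Y%m%d-%H%M%S)
echo "$timestamp"
```

Paired with zfs snapshot, this would allow cron-driven snapshots whose names match the ones the batch file creates.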

The last line is the interesting one: creating a directory under the share's .zfs/snapshot directory doesn't simply make a directory, it causes ZFS to take a snapshot.

For restoration, navigate to the share on the server, right-click, select Properties, and then click the Previous Versions tab.  From here you can easily browse through the snapshots.  You can also right-click an individual file and select Restore previous versions; this lists only the versions of the file that differ, rather than every snapshot.

[Screenshot: the Previous Versions tab]
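Snapshots can also be reached directly on the OpenSolaris side through the hidden .zfs directory; for instance (the snapshot and file names here are hypothetical):

```shell
# Each snapshot appears as a read-only directory tree
ls /tank/backups/someservice/.zfs/snapshot

# Pull a single file back out of a snapshot
cp /tank/backups/someservice/.zfs/snapshot/20091118-013000/somefile.dat \
   /tank/backups/someservice/
```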

There are, of course, a number of ways to improve this backup scheme.  Here are a few to test out.

Take a look at some of the other options for robocopy.  There are many to go through, but two notable ones are:

  • /B – Run in backup mode, allowing robocopy to copy files it doesn't otherwise have permission to read.
  • /COPYALL – Copy all NTFS file information, including ownership and the security and auditing ACLs.

Instead of stopping the necessary services, create and mount a volume shadow copy of the NTFS volume and mirror from that location.
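One way to do that on Windows Server 2008 and later is DiskShadow's scripting mode; a hypothetical script (backup.dsh, run with diskshadow /s backup.dsh, with the mirror script path made up for the example) might look like:

```shell
# backup.dsh -- snapshot D:, expose the shadow, mirror from it, clean up
set context persistent nowriters
add volume D: alias DataVol
create
expose %DataVol% P:
# mirror-from-shadow.cmd would run robocopy against P: instead of D:
exec C:\Scripts\mirror-from-shadow.cmd
delete shadows exposed P:
```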

The flexibility of ZFS when it comes to backups is astounding, and it's amazing what a simple script can accomplish.