Creating an Ubuntu 12.04 LTS VM on SmartOS

April 28th, 2012

Joyent’s SmartOS, a fairly recent virtualization platform, provides a number of prebuilt virtual machine images for various operating systems but doesn’t yet have one for Ubuntu 12.04 LTS. This post provides the steps required to install it manually, including the basic SmartOS commands as well as some Ubuntu-specific customizations.

  1. Create a JSON file with the virtual machine parameters.  Also see the SmartOS Wiki for some additional options and details.
    {
      "alias": "ubuntu12",
      "hostname": "ubuntu12",
      "brand": "kvm",
      "vcpus": 1,
      "autoboot": false,
      "ram": 4096,
      "resolvers": [ "10.1.1.1" ],
      "disks": [
        {
          "boot": true,
          "model": "virtio",
          "size": 40960
        }
      ],
      "nics": [
        {
          "nic_tag": "admin",
          "model": "virtio",
          "ip": "10.1.1.11",
          "netmask": "255.255.255.0",
          "gateway": "10.1.1.1",
          "primary": true
        }
      ]
    }
  2. Create the virtual machine with the vmadm command using the JSON file as the input.
    # vmadm create -f ubuntu-12.04.json
    Successfully created 3b202a79-f148-4c87-bb7f-ff9d64f724ca
  3. Copy the Ubuntu 12.04 LTS install ISO to the root dataset for this virtual machine. This assumes the ISO has already been copied to the /zones/isos directory on the SmartOS server.
    # cp /zones/isos/ubuntu-12.04-server-amd64.iso \
         /zones/3b202a79-f148-4c87-bb7f-ff9d64f724ca/root/.
  4. Boot the VM to the ISO.
    # vmadm start 3b202a79-f148-4c87-bb7f-ff9d64f724ca \
                  order=cd,once=d cdrom=/ubuntu-12.04-server-amd64.iso,ide
    Successfully started 3b202a79-f148-4c87-bb7f-ff9d64f724ca
  5. Get the VNC IP and port from the VM.
    # vmadm info 3b202a79-f148-4c87-bb7f-ff9d64f724ca vnc
    {
      "vnc": {
        "host": "10.1.1.4",
        "port": 63407,
        "display": 57507
      }
    }
  6. Connect to the VM with VNC and complete the installation as normal, including the reboot at the end.
  7. Enable the serial console in Ubuntu with the steps from the Ubuntu SerialConsoleHowto.  This step is optional but adds some nice administration benefits as demonstrated in the next step.
    1. Paste the following into /etc/init/ttyS0.conf.
      # ttyS0 - getty
      #
      # This service maintains a getty on ttyS0 from the point the system is
      # started until it is shut down again.
      
      start on stopped rc or RUNLEVEL=[2345]
      stop on runlevel [!2345]
      
      respawn
      exec /sbin/getty -L 115200 ttyS0 vt102
    2. Ask upstart to start the getty.
      $ sudo start ttyS0
  8. The serial console can then be reached from the SmartOS host with the vmadm tool; exit the console with CTRL-].
    # vmadm console 3b202a79-f148-4c87-bb7f-ff9d64f724ca
    
    Ubuntu 12.04 LTS ubuntu12 ttyS0
    
    ubuntu12 login: ed
    Password:
    Last login: Sat Apr 28 22:38:41 CDT 2012 from 10.1.1.4 on pts/0
    Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)
    
     * Documentation:  https://help.ubuntu.com/
    
      System information as of Sat Apr 28 22:39:26 CDT 2012
    
      System load:  0.0               Processes:           61
      Usage of /:   4.1% of 35.92GB   Users logged in:     1
      Memory usage: 1%                IP address for eth0: 10.1.1.11
      Swap usage:   0%
    
      Graph this data and manage this system at https://landscape.canonical.com/
    
    9 packages can be updated.
    0 updates are security updates.
    
    ed@ubuntu12:~$
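
The UUID juggling above can also be scripted. As a rough sketch, assuming the json CLI bundled with SmartOS, the VNC endpoint can be looked up by the VM’s alias instead:

# UUID=$(vmadm lookup -1 alias=ubuntu12)
# vmadm info $UUID vnc | json vnc.host vnc.port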

Quick Change Tool Post Bolt

June 21st, 2011

When I purchased my Grizzly G0602 10×22 metal lathe, one of the first projects I had in mind was to mount an AXA-size quick change tool post on it. Most of the methods I found online for doing this required either modifying the tool post mounting plate that came with the lathe or creating a suitable replacement with a threaded hole to fit the quick change tool post bolt. I didn’t want to modify the mounting plate, and creating one from scratch would have been a challenge without a mill, so I instead opted to create a new tool post bolt.

To do this I started with a 1/2″ x 3.5″ bolt, turned down the end, and then threaded it with metric threads to fit the mounting plate. In the end it turned out to be a quick and simple project that has worked very well.


Auditorium Lightning Effect

April 15th, 2011

A few years back I was working on a production of Shadowlands for a high school and one of the scenes had thunder and lightning.  We were going to use some simple lighting for the lightning but I had some alternative ideas and spent some time experimenting with them.  The end result was a nice lightning effect that we used for the shows.

I initially looked at strobe lights to create this effect due to their quick recharge time but soon turned elsewhere. There were a number of inexpensive units available, but they were all geared towards DJ lighting and could only flash at a fixed frequency. Not being too familiar with strobe lights like these, I wasn’t sure how easy they would be to modify to accept an external trigger.

Next on the list were external camera flashes, which can be triggered externally through the hot shoe mount. The main downside was a 15-second recharge time, so multiple flash units were needed to fire several flashes in quick succession. There was a nice assortment of inexpensive camera flashes available, but to cut the bill to zero I was able to borrow 3 nice Sigma flashes from the school’s journalism department. I wasn’t able to make any modifications to the borrowed flashes, and without hot shoes for them it was difficult to connect to the hot shoe contacts.

Like most external camera flashes, the Sigma flashes could be set to a slave mode in which they fire when another flash is detected. I wasn’t sure at first how this worked, but the transparent red cover over the sensor was a pretty strong hint that it was detecting the infrared light of the other flash. Testing this was simple: pointing any IR remote control at the sensor and pressing a button fired the flash.

Auditorium Lightning Flashes

With the flashes and triggering method in place, it was on to a control mechanism to trigger them. I had plenty of AVR microcontrollers on hand and assembled a simple control circuit on a breadboard. The main components were an ATMEGA168, a pushbutton, and 4 LEDs. For testing I started out with regular LEDs, and once everything was working these were swapped out for IR LEDs to trigger the flashes.

Auditorium Lightning Controller

The code for the AVR was very simple: wait for the button to be pressed, pulse the LEDs in one of the predefined, timed patterns, repeat. I ended up using only two flash patterns, but the code is easily expanded to include additional ones. DMX is the typical lighting control protocol, but given the constraints of this project the simple pushbutton was used instead. The full code that was used is shown below.

The flashes were mounted in the auditorium catwalks with two on the left and one on the right.  The IR LEDs were attached to the flashes with gaff tape and then connected to the breadboard with CAT5 cable.

In the end this was a successful project and worked well through all three performances.

#include <util/delay_basic.h>
#include <avr/io.h>

/* duration of the IR pulse used to trigger a flash, in milliseconds */
#define FLASH_MS 25

/* time the camera flashes need to recharge between firings */
#define FLASH_RECHARGE_MS 15000

/* Busy-wait for roughly the given number of milliseconds.  _delay_loop_2()
 * burns 4 cycles per iteration, so d << 8 iterations is about d milliseconds
 * at a 1 MHz clock (the ATMEGA168 factory default).  The delay is chunked
 * at 200 ms so the 16-bit loop counter never overflows. */
void delay_ms(uint16_t ms) {
  uint16_t a = 0;

  while (a < ms) {
    uint16_t d = ms - a;
    if (d > 200) {
      d = 200;
    }

    _delay_loop_2(d << 8);
    a += d;
  }
}

/* delay between flashes, compensating for the time spent pulsing the LED */
void adj_delay_ms(unsigned short ms) {
  delay_ms(ms - FLASH_MS);
}

/* pulse the IR LED on PORTD pin n to fire the corresponding flash */
void trigger_flash(unsigned char n) {
  PORTD |= _BV(n);
  delay_ms(FLASH_MS);
  PORTD &= ~_BV(n);
}

int main(void) {
  DDRD = 0xff;       /* PORTD: all outputs (IR LEDs, plus "ready" LED on PD3) */
  PORTD = 0x00;

  DDRC &= ~_BV(PC4); /* PC4 is the pushbutton input... */
  PORTC |= _BV(PC4); /* ...with the internal pull-up enabled (button pulls to ground) */

  char program = 0;

  while (1) {
    // wait for flashes to charge
    delay_ms(FLASH_RECHARGE_MS);

    // turn on "ready" LED
    PORTD |= _BV(PD3);

    while (PINC & _BV(PC4)) {
      // wait for button press (active low)
    }

    // turn off "ready" LED
    PORTD &= ~_BV(PD3);

    // run one of the predefined flash patterns
    switch (program) {
    case 0:
      trigger_flash(0);
      adj_delay_ms(100);

      trigger_flash(1);
      adj_delay_ms(500);

      trigger_flash(2);
      break;
    case 1:
      trigger_flash(2);
      adj_delay_ms(400);

      trigger_flash(0);
      adj_delay_ms(140);

      trigger_flash(1);
      break;
    }

    // alternate between the two patterns
    program += 1;
    if (program > 1) {
      program = 0;
    }
  }

  return (0);
}

VMware ESXi 4.1 on Intel D510MO

April 10th, 2011

I set up VMware ESXi 4.1 Update 1 on an Intel Desktop Board D510MO earlier this week and hit a few snags along the way.  This hardware is nowhere close to appearing in the VMware Compatibility Guide but some searching showed that a number of people had gotten it working.

My end goal was a diskless setup with ESXi booting from a USB flash drive with networked storage for the virtual machines.  The newest versions of ESXi allow installation to USB drives from the normal installer so I started off by booting to the install CD.  Even before I was prompted for any input I hit the following error:

vmkctl.HostCtlException Unable to load module /usr/lib/vmware/vmkmod/vmfs3: Failure

Some searching turned up a number of responses to the effect of “use supported hardware,” but for my purposes that’s not really an answer. Because others had been successful with this board, I suspected this was only an install-time issue and that ESXi would work correctly once installed.

I next attempted an install from a virtual machine running in VirtualBox on a different machine. I created the VM with a type of “Linux” and a subtype of “Linux 2.6 (64-bit)”. After setting this up I was able to boot the installer and install ESXi to the USB flash drive. I then moved the flash drive back to the D510 board and it booted successfully.

Once booted, though, I hit the second hurdle: no NICs were found. This board has a Realtek NIC that’s unsupported by ESXi. I found that some users had created a driver package for ESXi that included the driver needed for the Realtek. The older versions of this driver had numerous reports of network dropouts under load, but the newer version 8.018 seems to work well. I copied this file over the top of the oem.tgz file on the USB flash drive.

Once installed and with the proper NIC driver in place I finally had a working ESXi install on the D510 board.


Update $DISPLAY in Screen

December 12th, 2009

I’ve been a heavy user of GNU Screen for a number of years. My typical usage is to start a single screen session and attach to it as I move to different computers, either locally or via SSH. At times I need to run X applications from a shell within screen, but with $DISPLAY still set to the value screen was initially run with, this tends not to work after the session is detached and reattached from a different location.

A few days ago, I came across allsh, a program that allows commands to be executed in all currently running shells.

I was curious if a similar method might work to fix my $DISPLAY issue.  Ideally, I wanted to be able to attach to the screen session from any location and have $DISPLAY updated in all of the bash subprocesses of screen to properly reflect the desired display.

The following is what I came up with to achieve this.

In ~/.profile:

TRAPUSR2() {
 [ -f ~/.screen-display ] && . ~/.screen-display
}

trap TRAPUSR2 USR2

# set the $DISPLAY variables if the shell is a child of screen
if [ "`ps -p $PPID -o comm | tail -1`" == "screen" ] ; then
 [ -f ~/.screen-display ] && . ~/.screen-display
fi

And in a file named attach, placed somewhere in your $PATH:

#!/bin/bash

echo "DISPLAY=\"$DISPLAY\"" > ~/.screen-display
echo "SSH_CLIENT=\"$SSH_CLIENT\"" >> ~/.screen-display
echo "SSH_CONNECTION=\"$SSH_CONNECTION\"" >> ~/.screen-display
echo "SSH_TTY=\"$SSH_TTY\"" >> ~/.screen-display
echo "XAUTHORITY=\"$XAUTHORITY\"" >> ~/.screen-display

# detect the pid of screen; this could be smarter if more than one
# instance is running
if [ `screen -ls | awk -F. '/tached/ { print $1 }' | wc -l` != "1" ] ; then
 echo "Unable to detect the desired screen session."
 exit 1
fi
SCREEN_PID=`screen -ls | awk '/tached/ { split($1, a, "."); print a[1] }'`

# find the pids of the shells that are children of screen
BASH_PIDS=`ps -e -o pid,ppid,comm | \
 awk '$2 == var1 && $3 ~ /bash/ { print $1 }' var1=$SCREEN_PID`
kill -USR2 $BASH_PIDS

exec screen -d -r

With this setup, start screen as normal, and then to attach to it from another location, run attach.
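
A quick way to verify that it worked, where the display value shown is just illustrative of a typical SSH-forwarded display:

$ echo $DISPLAY
localhost:10.0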


Samba shadow_copy2 Enhancements

December 2nd, 2009

A few weeks ago there was a thread on the Samba mailing list regarding some difficulties in getting my shadow copy patches to work with newer versions of Samba.  These patches were originally written for Samba 3.0.25, and since then, Samba has moved up to version 3.4.3, with the 3.5.0 release on the horizon.  The more recent Samba versions also include a shadow_copy2 module that will likely be replacing the shadow_copy module in the future.

I spent some time today adapting the original patches to the shadow_copy2 module. This patch was made against Samba 3.4.3, and I will be working on a version for Samba 3.5.x over the next couple of days. I hope to get this integrated into Samba, but for now it’s available as a standalone patch, samba-3.4.3-shadowcopy.patch.gz, used below.

Creating a patched Samba source tree can be done with:

$ gzcat samba-3.4.3.tar.gz | tar -xf -
$ cd samba-3.4.3
$ gzcat ../samba-3.4.3-shadowcopy.patch.gz | patch -p1

The parameters added with this patch, as shown at the top of the source file, are:

shadow:sort = asc/desc, or blank for unsorted (default)

This is an optional parameter that specifies that the shadow
copy directories should be sorted before sending them to the
client.  This is beneficial for filesystems that don't read
directories alphabetically (e.g. ZFS).  If enabled, you typically
want to specify descending order.

shadow:format = <format specification for snapshot names>

This is an optional parameter that specifies the format
specification for the naming of snapshots.  The format must
be compatible with the conversion specifications recognized
by str[fp]time.  The default value is "@GMT-%Y.%m.%d-%H.%M.%S".

shadow:localtime = yes/no (default is no)

This is an optional parameter that indicates whether the
snapshot names are in UTC/GMT or the local time.

Example usage with ZFS for the [homes] share is:

[homes]
   comment = Home Directories
   browseable = no
   writable = yes
   vfs objects = shadow_copy2
   shadow: snapdir = .zfs/snapshot
   shadow: sort = desc
   shadow: localtime = yes
   shadow: format = %Y%m%d-%H%M%S

Where the snapshots would be taken with:

# zfs snapshot -r tank/home@`date +%Y%m%d-%H%M%S`
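
To take the snapshots automatically, a root crontab entry along these lines should work (note that percent signs must be escaped in crontab entries):

0 1 * * * /usr/sbin/zfs snapshot -r tank/home@`date +\%Y\%m\%d-\%H\%M\%S`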

Recent versions of OpenSolaris allow ZFS snapshots to be created remotely over SMB/CIFS by simply creating a directory in the .zfs/snapshot subdirectory. To see how this can be used, see my Windows Backups to ZFS post. Though that post refers to the SMB/CIFS server built into OpenSolaris, the concept works equally well with Samba and the shadow copy patch.


ntfsprogs for Virtual Disk Partitions

November 29th, 2009

The ntfsprogs package provides a nice set of tools for performing operations on NTFS file systems from non-Windows environments. They have many uses, and I’ve found them helpful in virtualized environments when dealing with virtual disk images. In particular, they allow for the easy restoration of individual files from NTFS virtual disks from the host OS. These tools, however, can only operate on entire devices, and in many cases the individual partitions of virtual disk images are not exposed as block devices by the operating system, preventing these tools from working.

As a workaround for this, I’ve created a patch against ntfsprogs 2.0.0 that adds an --offset option to most of the tools, allowing a partition offset, in bytes from the start of the device, to be specified.
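
The value passed to --offset is the partition’s starting offset in bytes. With the usual 512-byte sectors this is simply the starting sector multiplied by 512, so a partition beginning at sector 2048 (as in the example further below) gives:

$ echo $((2048 * 512))
1048576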

These patches were tested on OpenSolaris but should work on other systems as well. They include a Solaris patch to fix compilation issues on Solaris. They are available as a patched source tarball (ntfsprogs-2.0.0-offset.tar.gz), fetched in the commands below.

Compiling the tools can be done with:

$ wget http://www.edplese.com/files/ntfsprogs-2.0.0-offset.tar.gz
$ gzcat ntfsprogs-2.0.0-offset.tar.gz | tar -xf -
$ cd ntfsprogs-2.0.0-offset
$ ./configure && make

Once compiled, the tools can be installed with make install, or run in place from the ntfsprogs-2.0.0-offset/ntfsprogs directory without having to install them.

The following example demonstrates the tools operating on a snapshot of an NTFS volume stored on a ZFS zvol block device.

# lspart.py /dev/zvol/dsk/rpool/xvm/win2k8@installed
  Start Offset    Size  Type
       1048576  100.0M  07 Windows NTFS
     105906176   15.9G  07 Windows NTFS
             0    0.0B  00 Empty
             0    0.0B  00 Empty
# ntfsls /dev/zvol/dsk/rpool/xvm/win2k8@installed
Failed to startup volume: Invalid argument.
Failed to mount '/dev/zvol/dsk/rpool/xvm/win2k8': Invalid argument.
The device '/dev/zvol/dsk/rpool/xvm/win2k8' doesn't have a valid NTFS.
Maybe you selected the wrong device? Or the whole disk instead of a
partition (e.g. /dev/hda, not /dev/hda1)? Or the other way around?
# ntfsls --offset 1048576 /dev/zvol/dsk/rpool/xvm/win2k8@installed
Boot
bootmgr
BOOTSECT.BAK
System Volume Information
# ntfsls --offset 105906176 /dev/zvol/dsk/rpool/xvm/win2k8@installed
$Recycle.Bin
Documents and Settings
pagefile.sys
PerfLogs
Program Files
Program Files (x86)
ProgramData
Recovery
System Volume Information
Users
Windows
# ntfscat --offset 105906176 /dev/zvol/dsk/rpool/xvm/win2k8@installed \
          Windows/System32/notepad.exe > notepad.exe

ZFS Deduplication with NTFS

November 24th, 2009

ZFS deduplication was recently integrated into build 128 of OpenSolaris, and while others have tested it out with normal file operations, I was curious to see how effective it would be with zvol-backed NTFS volumes.  Due to the structure of NTFS I suspected that it would work well, and the results confirmed that.

NTFS allocates space in fixed sizes, called clusters.  The default cluster size for NTFS volumes under 16 TB is 4K, but this can be explicitly set to different values when the volume is created.  For this test I stuck with the default 4K cluster size and matched the zvol block size to the cluster size to maximize the effectiveness of the deduplication.  In reality, for this test the zvol block size most likely had a negligible effect, but for normal workloads it could be considerable.
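
As an aside, the cluster size can be set explicitly when formatting the volume from Windows. A minimal example for a hypothetical E: drive with 4K clusters:

C:\> format E: /FS:NTFS /A:4096 /Q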

The OpenSolaris system was prepared by installing OpenSolaris build 127, installing the COMSTAR iSCSI Target, and then BFU'ing the system to build 128.

The zpool was created with both dedup and compression enabled:

# zpool create tank c4t1d0
# zfs set dedup=on tank
# zfs set compression=on tank
# zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  19.9G   148K  19.9G     0%  1.00x  ONLINE  -

Next, the zvol block devices were created.  Note that the volblocksize option was explicitly set to 4K:

# zfs create tank/zvols
# zfs create -V 4G -o volblocksize=4K tank/zvols/vol1
# zfs create -V 4G -o volblocksize=4K tank/zvols/vol2
# zfs list -r tank
NAME              USED  AVAIL  REFER  MOUNTPOINT
tank             8.00G  11.6G    23K  /tank
tank/zvols       8.00G  11.6G    21K  /tank/zvols
tank/zvols/vol1     4G  15.6G    20K  -
tank/zvols/vol2     4G  15.6G    20K  -
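
For reference, sharing the zvols via COMSTAR looks roughly like the following sketch, where the LU GUIDs printed by sbdadm create-lu are placeholders:

# itadm create-target
# sbdadm create-lu /dev/zvol/rdsk/tank/zvols/vol1
# sbdadm create-lu /dev/zvol/rdsk/tank/zvols/vol2
# stmfadm add-view <guid-of-vol1>
# stmfadm add-view <guid-of-vol2>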

Once shared with the COMSTAR iSCSI Target, the volumes were set up and formatted as NTFS from Windows. With only 4 MB of data on them, the dedup ratio shot way up:

# zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  19.9G  3.88M  19.9G     0%  121.97x  ONLINE  -

The NTFS volumes were configured in Windows as disks D: and E:.  I started off by copying a 10 MB file and then a 134 MB file to D:.  The 10 MB file was used to offset the larger file from the start of the disk so that it wouldn’t be in the same location on both volumes.  As expected, the dedup ratio dropped down towards 1x as there was only a single copy of the files:

# zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  19.9G   133M  19.7G     0%  1.39x  ONLINE  -

The 134 MB file was then copied to E:, and immediately the dedup ratio jumped up.  So far, so good:  dedup works across multiple NTFS volumes:

# zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  19.9G   173M  19.7G     0%  2.26x  ONLINE  -

A second copy of the 134 MB file was copied to E: to test dedup between files on the same NTFS volume.  As expected, the dedup ratio jumped back up to around 3x:

# zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  19.9G   184M  19.7G     0%  3.19x  ONLINE  -

Though simple, these tests showed that ZFS deduplication performed well, and it conserved disk space within a single NTFS volume and also across multiple volumes in the same ZFS pool.  The dedup ratios were even a bit higher than expected which suggests that quite a bit of the NTFS metadata, at least initially, was deduplicated.

Windows Backups to ZFS

November 18th, 2009

One of the methods I use for backing up Windows applications is to mirror the files to a ZFS file system using robocopy and then snapshot the file system to preserve its state.  I use this primarily for nightly backups and during application maintenance because it typically requires that the service be stopped for the duration of the backup.

There are a number of features of ZFS that make it great for backups, among them snapshots, compression, and efficient incremental sending of file systems and block storage. Dedup will make its appearance in build 128, adding further benefits as well. All of these help to conserve disk space and speed up backup and restore operations.

This assumes a recent, working OpenSolaris system with the CIFS service already configured. The latest version of OpenSolaris at this time is build 127. For documentation on how to set up the CIFS service, see Getting Started With the Solaris CIFS Service.

To start off, create a parent file system for the backups. The purpose of this file system is to allow properties to be set once and then inherited by the descendant file systems created for the backup sets. Enable both mixed case sensitivity and non-blocking mandatory locks to enhance compatibility between POSIX and Windows file semantics.

Set the sharesmb property to share the file system via CIFS and to shorten the names of the shares. The name specified below turns into the backups_ prefix for the descendant file system share names. Without it, the prefix would be the full file system path, in this case tank_backups_.

In addition, allow the backup user access to create snapshots on the descendant file systems so that snapshots can be created by simply creating a directory from the script.

# zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=name=backups tank/backups
# zfs allow -d backup@edplese.com snapshot,mount tank/backups

With the initial setup completed, begin creating the backup sets.  Create a descendant file system under tank/backups for each backup set and give the backup user write access to it.  This is a simple example and it might be worthwhile to give other users read access to it as well or add more advanced ACLs to the file systems.

# zfs create tank/backups/someservice
# chown backup@edplese.com /tank/backups/someservice
# chmod 700 /tank/backups/someservice

Normally, I enable compression for the entire pool and then disable it for file systems that won’t see any benefit from it, such as those holding only multimedia files.  If compression isn’t inherited by the backup file systems, it might be beneficial to enable it on them.  Those that can spare performance for additional disk space might try gzip compression instead of the default lzjb.

# zfs set compression=on tank/backups

Finally, create a customized Windows batch file and set it to run automatically with the Windows Task Scheduler.

@echo off
set src="D:\Data\SomeService"
set dst="\\opensolaris\backups_someservice"
set service="SomeService"
set timestamp="%DATE:~10,4%%DATE:~4,2%%DATE:~7,2%-%TIME:~0,2%%TIME:~3,2%%TIME:~6,2%"
set timestamp="%timestamp: =0%"

net stop "%service%"
robocopy "%src%" "%dst%" /MIR
net start "%service%"
mkdir "%dst%\.zfs\snapshot\%timestamp%"

The script is straightforward, and the only complicated lines are the timestamp ones. Between the two of them they build a timestamp of the form YYYYMMDD-HHMMSS. The second line fixes any single digits that occur, replacing their leading spaces with zeros.
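
As a worked example, assuming a US-style short date:

rem with %DATE% = "Wed 11/18/2009" and %TIME% = " 9:05:07.12":
rem   %DATE:~10,4% -> "2009"   %DATE:~4,2% -> "11"   %DATE:~7,2% -> "18"
rem   %TIME:~0,2%  -> " 9"     %TIME:~3,2% -> "05"   %TIME:~6,2% -> "07"
rem the first line yields "20091118- 90507", and the space-to-zero
rem substitution then produces "20091118-090507"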

The last line is interesting: although it appears to simply create a directory, doing so causes ZFS to take a snapshot.

For restoration, navigate to the share on the server, right-click, select Properties, and then click on the Previous Versions tab.  From here, you can easily browse through the snapshots.  You can also right-click on individual files and then click on Restore previous version and it will only list the versions of the file that differ rather than displaying every snapshot.

Previous Versions

There are, of course, a number of ways to improve this backup scheme. Here are a few to test out.

Take a look at some of the other options for robocopy.  There are a bunch to go through, but a couple notable ones are:

  • /B – Enable the backup privilege, allowing robocopy to back up files it wouldn’t normally have permission to read.
  • /COPYALL – Copy NTFS security information including file ownership and the data and audit ACLs.
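
With both options added, the mirror line in the script becomes:

robocopy "%src%" "%dst%" /MIR /B /COPYALL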

Instead of stopping the necessary services, create and mount a volume shadow copy of the NTFS volume and mirror from that location.

The flexibility of ZFS is astounding when it comes to backups, and it’s amazing what a simple script can accomplish.