Block Access Management

In my NetApp introduction section I spoke about the two ways of accessing the NetApp filer: file-based access and block-based access.

File-Based Protocols    NFS, CIFS, FTP, TFTP, HTTP
Block-Based Protocols   Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), Internet SCSI (iSCSI)

In this section I will cover the most common block-based protocols; if a protocol is not covered here then please check out the official documentation.

I have another web page covering file access: NFS, CIFS, FTP, and HTTP.

Block Based Access

In iSCSI and FC networks, storage systems are targets that have storage target devices, which are referred to as LUNs, or logical units. Using the Data ONTAP operating system, you configure the storage by creating LUNs. The LUNs are accessed by hosts, which are initiators in the storage network. To connect to iSCSI networks, hosts can use standard Ethernet network adapters (NICs), TCP offload engine (TOE) cards with software initiators, or dedicated iSCSI HBAs. To connect to FC networks, hosts require Fibre Channel host bus adapters (HBAs).

Data ONTAP 7.2 added support for the Asymmetric Logical Unit Access (ALUA) features of SCSI, also known as SCSI Target Port Groups or Target Port Group Support. ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA allows the initiator to query the target about path attributes, such as primary path and secondary path. It also allows the target to communicate events back to the initiator. As a result, multipathing software can be developed to support any array. Proprietary SCSI commands are no longer required as long as the host supports the ALUA standard. For iSCSI SANs, ALUA is supported only with Solaris hosts running the iSCSI Solaris Host Utilities 3.0 for Native OS.

iSCSI Introduction

The iSCSI protocol is a licensed service on the storage system that enables you to transfer block data to hosts using the SCSI protocol over TCP/IP. The iSCSI protocol standard is defined by RFC 3720. In an iSCSI network, storage systems are targets that have storage target devices, which are referred to as LUNs (logical units). A host with an iSCSI host bus adapter (HBA), or running iSCSI initiator software, uses the iSCSI protocol to access LUNs on a storage system. The iSCSI protocol is implemented over the storage system’s standard gigabit Ethernet interfaces using a software driver. The connection between the initiator and target uses a standard TCP/IP network. No special network configuration is needed to support iSCSI traffic. The network can be a dedicated TCP/IP network, or it can be your regular public network. The storage system listens for iSCSI connections on TCP port 3260.
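
As a quick illustration, the iSCSI service is enabled like any other licensed feature; the license code below is a placeholder and the commands are the same ones listed in the command tables later on this page:

# Add the iSCSI license (the license code here is a placeholder)
license add XXXXXXX

# Start the service and confirm it is running
iscsi start
iscsi status

# Display the storage system's iSCSI target node name
iscsi nodename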

In an iSCSI network, there are two types of nodes: targets and initiators

Targets Storage Systems (NetApp, EMC)
Initiators Hosts (Unix, Linux, Windows)

Storage systems and hosts can be direct-attached or connected through Ethernet switches. Both direct-attached and switched configurations use Ethernet cable and a TCP/IP network for connectivity. You can of course use an existing network, but if possible make it a dedicated network for the storage system, as this will improve performance.
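
For example, if you want iSCSI traffic to use only a dedicated interface, you can disable the protocol on the others (interface names here are just examples):

# Show which interfaces currently accept iSCSI traffic
iscsi interface show

# Disable iSCSI on the public interface and enable it on the dedicated one
iscsi interface disable e0a
iscsi interface enable e0b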

Every iSCSI node must have a node name. The two formats, or type designators, for iSCSI node names are iqn and eui. The storage system always uses the iqn-type designator. The initiator can use either the iqn-type or eui-type designator.

iqn

The iqn-type designator is a logical name that is not linked to an IP address.

It is based on the following components:

  • The type designator itself, iqn, followed by a period (.)
  • The date when the naming authority acquired the domain name, followed by a period
  • The name of the naming authority, optionally followed by a colon (:)
  • A unique device name

The format is:
               iqn.yyyy-mm.backward-naming-authority:unique-device-name

Note:
yyyy-mm = the year and month in which the naming authority acquired the domain name.
backward-naming-authority = the reverse domain name of the entity responsible for naming this device.
unique-device-name = a free-format unique name for this device assigned by the naming authority.

eui

The eui-type designator is based on the type designator, eui, followed by a period, followed by sixteen hexadecimal digits.

The format is:
                                     eui.0123456789abcdef
Storage system node name

Each storage system has a default node name based on a reverse domain name and the serial number of the storage system's non-volatile RAM (NVRAM) card.

The node name is displayed in the following format:
                                             
                                     iqn.1992-08.com.netapp:sn.serial-number

The following example shows the default node name for a storage system with the serial number 12345678:

                                     iqn.1992-08.com.netapp:sn.12345678

The storage system checks the format of the initiator node name at session login time. If the initiator node name does not comply with storage system node name requirements, the storage system rejects the session.

A target portal group is a set of network portals within an iSCSI node over which an iSCSI session is conducted. In a target, a network portal is identified by its IP address and listening TCP port. For storage systems, each network interface can have one or more IP addresses and therefore one or more network portals. A network interface can be an Ethernet port, virtual local area network (VLAN), or virtual interface (vif).

The assignment of target portals to portal groups is important for two reasons:

  • The iSCSI protocol allows only one session between a specific iSCSI initiator port and a single portal group on the target.
  • All connections within an iSCSI session must use target portals that belong to the same portal group.

The Internet Storage Name Service (iSNS) is a protocol that enables automated discovery and management of iSCSI devices on a TCP/IP storage network. An iSNS server maintains information about active iSCSI devices on the network, including their IP addresses, iSCSI node names, and portal groups. You obtain an iSNS server from a third-party vendor. If you have an iSNS server on your network, and it is configured and enabled for use by both the initiator and the storage system, the storage system automatically registers its IP address, node name, and portal groups with the iSNS server when the iSNS service is started. The iSCSI initiator can query the iSNS server to discover the storage system as a target device. If you do not have an iSNS server on your network, you must manually configure each target to be visible to the host.
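
A minimal sketch of registering with an iSNS server follows; the IP address is an example, and the exact isns subcommand syntax may vary between Data ONTAP releases, so verify it against your documentation:

# Point the storage system at the iSNS server (example address)
iscsi isns config 192.168.10.20

# Enable the iSNS service so the target registers itself
iscsi isns start

# Display the current iSNS configuration and registration state
iscsi isns show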

The Challenge Handshake Authentication Protocol (CHAP) enables authenticated communication between iSCSI initiators and targets. When you use CHAP authentication, you define CHAP user names and passwords on both the initiator and the storage system. During the initial stage of an iSCSI session, the initiator sends a login request to the storage system to begin the session. The login request includes the initiator’s CHAP user name and CHAP algorithm. The storage system responds with a CHAP challenge. The initiator provides a CHAP response. The storage system verifies the response and authenticates the initiator. The CHAP password is used to compute the response.
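
As a hedged example, CHAP entries are managed with the iscsi security commands; the initiator name, user name, and password below are placeholders:

# Require CHAP for a specific initiator (placeholder credentials)
iscsi security add -i iqn.1991-05.com.microsoft:xblade -s CHAP -n chapuser -p chappassword123

# Or define a default authentication method for initiators without a specific entry
iscsi security default -s CHAP -n chapuser -p chappassword123

# Verify the security settings
iscsi security show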

During an iSCSI session, the initiator and the target communicate over their standard Ethernet interfaces, unless the host has an iSCSI HBA. The storage system appears as a single iSCSI target node with one iSCSI node name. For storage systems with a MultiStore license enabled, each vFiler unit is a target with a different node name. On the storage system, the interface can be an Ethernet port, virtual network interface (vif), or a virtual LAN (VLAN) interface. Each interface on the target belongs to its own portal group by default. This enables an initiator port to conduct simultaneous iSCSI sessions on the target, with one session for each portal group. The storage system supports up to 1,024 simultaneous sessions, depending on its memory capacity. To determine whether your host’s initiator software or HBA can have multiple sessions with one storage system, see your host OS or initiator documentation. You can change the assignment of target portals to portal groups as needed to support multiconnection sessions, multiple sessions, and multipath I/O. Each session has an Initiator Session ID (ISID), a number that is determined by the initiator.
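
A short sketch of inspecting sessions and target portal groups is shown below; the portal group name and interfaces are examples, and the tpgroup subcommand syntax should be checked against your Data ONTAP release:

# List current iSCSI sessions and their connections
iscsi session show -t

# Show how interfaces are currently assigned to target portal groups
iscsi tpgroup show

# Example: create a user-defined portal group containing two interfaces
iscsi tpgroup create tpgroup_multi e0a e0b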

FC Introduction

FC is a licensed service on the storage system that enables you to export LUNs and transfer block data to hosts using the SCSI protocol over a Fibre Channel fabric. In an FC network, nodes include targets, initiators, and switches. Nodes register with the Fabric Name Server when they are connected to an FC switch.

Targets Storage Systems (NetApp, EMC)
Initiators Hosts (Unix, Linux, Windows)

Storage systems and hosts have adapters so they can be directly connected to each other or to FC switches with optical cable. For switch or storage system management, they might be connected to each other or to TCP/IP switches with Ethernet cable. When a node is connected to the FC SAN, it registers each of its ports with the switch’s Fabric Name Server service, using a unique identifier. Each FC node is identified by a worldwide node name (WWNN) and a worldwide port name (WWPN). WWPNs identify each port on an adapter.

WWPNs are used for the following purposes:

  • Creating an initiator group. The WWPNs of the host's HBAs are added to an igroup, which is then mapped to LUNs to control which hosts can access them.
  • Uniquely identifying the storage system's FC target ports. Hosts and switch zoning use the target port WWPNs to establish paths to the LUNs.

When the FCP service is first initialized, it assigns a WWNN to the storage system based on the serial number of its NVRAM adapter. The WWNN is stored on disk. Each target port on the HBAs installed in the storage system has a unique WWPN. Both the WWNN and the WWPN are 64-bit addresses represented in the following format: nn:nn:nn:nn:nn:nn:nn:nn, where n represents a hexadecimal value. The storage system also has a unique system serial number that you can view by using the sysconfig command. The system serial number is a unique seven-digit identifier that is assigned when the storage system is manufactured. You cannot modify this serial number. Some multipathing software products use the system serial number together with the LUN serial number to identify a LUN.
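
The following commands, all of which appear in the command tables later on this page, display these identifiers:

# Display the storage system's WWNN
fcp nodename

# Display the target adapters and their WWPNs
fcp show adapter -v

# Display the system serial number (among other configuration details)
sysconfig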

You use the fcp show initiator command to see all of the WWPNs, and any associated aliases, of the FC initiators that have logged on to the storage system. Data ONTAP displays the WWPN as Portname. To determine which WWPNs are associated with a specific host, see the FC Host Utilities documentation for your host. These documents describe commands supplied by the Host Utilities or the vendor of the initiator, or methods that show the mapping between the host and its WWPN. For example, for Windows hosts, use the lputilnt, HBAnyware, or SANsurfer applications, and for UNIX hosts, use the sanlun command.

Getting the Storage Ready

I have discussed in detail how to create aggregates, volumes (traditional and FlexVol), and qtrees in my disk administration section.

Here's a quick recap

Once you set up the underlying aggregate, you can create, clone, or resize FlexVol volumes without regard to the underlying physical storage. You do not have to manipulate the aggregate frequently. You use either traditional or FlexVol volumes to organize and manage system and user data. A volume can hold qtrees and LUNs. A qtree is a subdirectory of the root directory of a volume. You can use qtrees to subdivide a volume in order to group LUNs. You create LUNs in the root of a volume (traditional or flexible) or in the root of a qtree, with the exception of the root volume. Do not create LUNs in the root volume because it is used by Data ONTAP for system administration. The default root volume is /vol/vol0.
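
A minimal sketch of the layout described above might look like this; the aggregate, volume, and qtree names and sizes are examples only:

# Create an aggregate from 14 spare disks (example name and disk count)
aggr create aggr1 14

# Create a 100 GB FlexVol volume on that aggregate
vol create flexvol1 aggr1 100g

# Optionally create a qtree to group LUNs
qtree create /vol/flexvol1/qtree1

# LUNs would then live at /vol/flexvol1/lun0 or /vol/flexvol1/qtree1/lun0 - never in /vol/vol0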

Autodelete is a volume-level option that allows you to define a policy for automatically deleting Snapshot copies based on a definable threshold. Using autodelete is recommended in most SAN configurations.

You can set that threshold, or trigger, to automatically delete Snapshot copies when:

  • the volume is nearly full
  • the Snapshot reserve space is nearly full
  • the overwrite reserved space is full
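
For example, assuming a volume named flexvol1, the trigger and policy are set with the snap autodelete command:

# Delete Snapshot copies automatically when the volume is nearly full, aiming for 20% free space
snap autodelete flexvol1 trigger volume
snap autodelete flexvol1 target_free_space 20

# Turn the policy on and review it
snap autodelete flexvol1 on
snap autodelete flexvol1 show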

Two other things that you need to be aware of are Space Reservation and Fractional Reserve

Space Reservation When space reservation is enabled for one or more LUNs, Data ONTAP reserves enough space in the volume (traditional or FlexVol) so that writes to those LUNs do not fail because of a lack of disk space.
Fractional Reserve Fractional reserve is a volume option that enables you to determine how much space Data ONTAP reserves for Snapshot copy overwrites for LUNs, as well as for space-reserved files when all other space in the volume is used.
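
Both settings are controlled from the command line; the volume and LUN paths below are examples:

# Fractional reserve is a volume option (percentage of LUN size reserved for Snapshot overwrites)
vol options flexvol1 fractional_reserve 100

# Display or enable space reservation on an individual LUN
lun set reservation /vol/flexvol1/lun0
lun set reservation /vol/flexvol1/lun0 enable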

When provisioning storage in a SAN environment, there are several best practices to consider. Selecting and following the best practice that is most appropriate for you is critical to ensuring your systems run smoothly.

There are generally two ways to provision storage in a SAN environment: fully reserve space for LUN overwrites up front using fractional reserve, or size the volume for the LUN and its Snapshot data and rely on autodelete (and optionally autosize) to free space as needed.

In Data ONTAP, fractional reserve is set to 100 percent and autodelete is disabled by default. However, in a SAN environment, it usually makes more sense to use autodelete (and sometimes autosize).

When using fractional reserve, you need to reserve enough space for the data inside the LUN, fractional reserve, and snapshot data, or: X + X + Delta. For example, you might need to reserve 50 GB for the LUN, 50 GB when fractional reserve is set to 100%, and 50 GB for snapshot data, or a volume of 150 GB. If fractional reserve is set to a percentage other than 100%, then the calculation becomes more complex.

In contrast, when using autodelete, you need only calculate the amount of space required for the LUN and snapshot data, or X + Delta. Since you can configure the autodelete setting to automatically delete older snapshots when space is required for data, you need not worry about running out of space for data.

For example, if you have a 100 GB volume, you might allocate 50 GB for a LUN, and the remaining 50 GB is used for snapshot data. Or in that same 100 GB volume, you might reserve 30 GB for the LUN, and 70 GB is then allocated for snapshots. In both cases, you can configure snapshots to be automatically deleted to free up space for data, so fractional reserve is unnecessary.
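
Putting the autodelete approach together, a volume for the 100 GB example above might be provisioned roughly as follows; this is only a sketch under the assumptions already stated, and the names, sizes, and use of autosize are illustrative:

# 100 GB volume with no fractional reserve and no separate snapshot reserve
vol create flexvol1 aggr1 100g
vol options flexvol1 fractional_reserve 0
snap reserve flexvol1 0

# Free space by deleting old Snapshot copies when the volume fills up, and optionally let it grow
snap autodelete flexvol1 trigger volume
snap autodelete flexvol1 on
vol autosize flexvol1 -m 150g -i 10g on

# A 50 GB LUN leaves the rest of the volume for snapshot data
lun create -s 50g -t windows /vol/flexvol1/lun0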

LUNs, igroups, and LUN maps

When you create a LUN there are a number of items you need to know

The path name of a LUN must be at the root level of the qtree or volume in which the LUN is located, for example /vol/database/lun1. Do not create LUNs in the root volume; the default root volume is /vol/vol0.

The name of the LUN is case-sensitive and can contain 1 to 256 characters; spaces are not allowed. LUN names can contain only the letters A through Z, a through z, numbers 0 through 9, hyphen (“-”), underscore (“_”), left brace (“{”), right brace (“}”), and period (“.”).

The LUN Multiprotocol Type, or operating system type, specifies the OS of the host accessing the LUN. It also determines the layout of data on the LUN, the geometry used to access that data, and the minimum and maximum size of the LUN. The LUN Multiprotocol Type values are solaris, solaris_efi, windows, windows_gpt, windows_2008, hpux, aix, linux, netware, xen, hyper_v, and vmware. When you create a LUN, you must specify the LUN type. Once the LUN is created, you cannot modify the LUN host operating system type.

You specify the size of a LUN in bytes or by using specific multiplier suffixes (k, m, g, t).

The LUN description is an optional attribute you use to specify additional information about the LUN.

A LUN must have a unique identification number (ID) so that the host can identify and access the LUN. You map the LUN ID to an igroup so that all the hosts in that igroup can access the LUN. If you do not specify a LUN ID, Data ONTAP automatically assigns one.

When you create a LUN by using the lun setup command or FilerView, you specify whether you want to enable space reservations. When you create a LUN using the lun create command, space reservation is automatically turned on.

Initiator groups (igroups) are tables of FCP host WWPNs or iSCSI host nodenames. You define igroups and map them to LUNs to control which initiators have access to LUNs. Typically, you want all of the host’s HBAs or software initiators to have access to a LUN. If you are using multipathing software or have clustered hosts, each HBA or software initiator of each clustered host needs redundant paths to the same LUN. You can create igroups that specify which initiators have access to the LUNs either before or after you create LUNs, but you must create igroups before you can map a LUN to an igroup. Initiator groups can have multiple initiators, and multiple igroups can have the same initiator. However, you cannot map a LUN to multiple igroups that have the same initiator.

Example igroup configurations (host HBA WWPNs, igroups, and LUN mappings):

Linux1, single-path (one HBA)
  HBA WWPN: 10:00:00:00:c9:2b:7c:8f
  igroup: linux-group0
  WWPNs added to igroup: 10:00:00:00:c9:2b:7c:8f
  LUN mapped to igroup: /vol/vol2/lun0

Linux2, multipath (two HBAs)
  HBA WWPNs: 10:00:00:00:c9:2b:3e:3c, 10:00:00:00:c9:2b:09:3c
  igroup: linux-group1
  WWPNs added to igroup: 10:00:00:00:c9:2b:3e:3c, 10:00:00:00:c9:2b:09:3c
  LUN mapped to igroup: /vol/vol2/lun1

The igroup name is a case-sensitive name that must satisfy several requirements: it must contain 1 to 96 characters, spaces are not allowed, it can contain only the letters A through Z, a through z, numbers 0 through 9, hyphen (“-”), underscore (“_”), colon (“:”), and period (“.”), and it must start with a letter or number.

The igroup type can be either -i for iSCSI or -f for FC.

The ostype indicates the type of host operating system used by all of the initiators in the igroup. All initiators in an igroup must be of the same ostype. The ostypes of initiators are solaris, windows, hpux, aix, netware, xen, hyper_v, vmware, and linux. You must select an ostype for the igroup.

Finally, we get to LUN mapping, which is the process of associating a LUN with an igroup. When you map the LUN to the igroup, you grant the initiators in the igroup access to the LUN. You must map a LUN to an igroup to make the LUN accessible to the host. Data ONTAP maintains a separate LUN map for each igroup to support a large number of hosts and to enforce access control. When mapping, you specify the path name of the LUN to be mapped and the name of the igroup that contains the hosts that will access the LUN.

Assign a number for the LUN ID, or accept the default LUN ID. Typically, the default LUN ID begins with 0 and increments by 1 for each additional LUN as it is created. The host associates the LUN ID with the location and path name of the LUN. The range of valid LUN ID numbers depends on the host.

There are two ways to set up a LUN:

LUN setup command

ontap1> lun setup

Note: the "lun setup" will display prompts that lead you through the setup process

Good old-fashioned command line

# Create the LUN
lun create -s 100m -t windows /vol/tradvol1/lun1

# Create the igroup; you must obtain the node's identifier (my home PC is: iqn.1991-05.com.microsoft:xblade)
igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade

# Map the LUN to the igroup
lun map /vol/tradvol1/lun1 win_hosts_group1 0

The full set of commands for both lun and igroup is below.

LUN configuration

Display
  lun show
  lun show -m
  lun show -v

Initialize/Configure LUNs, mapping
  lun setup

  Note: follow the prompts to create and configure LUNs

Create
  lun create -s 100m -t windows /vol/tradvol1/lun1

Destroy
  lun destroy [-f] /vol/tradvol1/lun1

  Note: the "-f" option forces the destroy

Resize
  lun resize <lun_path> <size>
  lun resize /vol/tradvol1/lun1 75m

Restart block protocol access
  lun online /vol/tradvol1/lun1

Stop block protocol access
  lun offline /vol/tradvol1/lun1

Map a LUN to an initiator group
  lun map /vol/tradvol1/lun1 win_hosts_group1 0
  lun map -f /vol/tradvol1/lun2 linux_host_group1 1
  lun show -m

  Note: use "-f" to force the mapping

Remove LUN mapping
  lun show -m
  lun offline /vol/tradvol1/lun1
  lun unmap /vol/tradvol1/lun1 win_hosts_group1 0

Display or zero read/write statistics for a LUN
  lun stats /vol/tradvol1/lun1

Comments
  lun comment /vol/tradvol1/lun1 "10GB for payroll records"

Check all lun/igroup/fcp settings for correctness
  lun config_check -v

Manage LUN cloning
  # Create a Snapshot copy of the volume containing the LUN to be cloned
  snap create tradvol1 tradvol1_snapshot_08122010

  # Create the LUN clone
  lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010

Show the maximum possible size of a LUN on a given volume or qtree
  lun maxsize /vol/tradvol1

Move (rename) a LUN
  lun move /vol/tradvol1/lun1 /vol/tradvol1/windows_lun1

Display/change LUN serial number
  lun serial -x /vol/tradvol1/lun1

Manage LUN properties (display/set space reservation)
  lun set reservation /vol/tradvol1/hpux/lun0

Configure NAS file-sharing properties
  lun share <lun_path> { none | read | write | all }

Manage LUN and snapshot interactions
  lun snap usage -s <volume> <snapshot>

igroup configuration

Display
  igroup show
  igroup show -v
  igroup show iqn.1991-05.com.microsoft:xblade

Create (iSCSI)
  igroup create -i -t windows win_hosts_group1 iqn.1991-05.com.microsoft:xblade

Create (FC)
  igroup create -f -t windows win_hosts_group1 10:00:00:00:c9:2b:7c:8f

Destroy
  igroup destroy win_hosts_group1

Add initiators to an igroup
  igroup add win_hosts_group1 iqn.1991-05.com.microsoft:laptop

Remove initiators from an igroup
  igroup remove win_hosts_group1 iqn.1991-05.com.microsoft:laptop

Rename
  igroup rename win_hosts_group1 win_hosts_group2

Set O/S type
  igroup set win_hosts_group1 ostype windows

Enable ALUA
  igroup set win_hosts_group1 alua yes

Note: ALUA defines a standard set of SCSI commands for discovering and managing multiple paths to LUNs on Fibre Channel and iSCSI SANs. ALUA enables the initiator to query the target about path attributes, such as primary path and secondary path. It also enables the target to communicate events back to the initiator. As long as the host supports the ALUA standard, multipathing software can be developed to support any array. Proprietary SCSI commands are no longer required.

There are a number of iSCSI commands that you can use. I am not going to discuss iSCSI security (CHAP or RADIUS) in detail; I will leave you to look at the documentation on this advanced topic.

Display
  iscsi initiator show
  iscsi session show [-t]
  iscsi connection show -v
  iscsi security show

Status
  iscsi status

Start
  iscsi start

Stop
  iscsi stop

Stats
  iscsi stats

Nodename
  iscsi nodename

  # to change the name
  iscsi nodename <new name>

Interfaces
  iscsi interface show
  iscsi interface enable e0b
  iscsi interface disable e0b

Portals
  iscsi portal show

  Note: Use the iscsi portal show command to display the target IP addresses of the storage system. The storage system's target IP addresses are the addresses of the interfaces used for the iSCSI protocol.

Access lists
  iscsi interface accesslist show

  Note: you can add or remove interfaces from the list

We have discussed how to set up a server using iSCSI, but what if the server uses FC to connect to the NetApp?

A port set consists of a group of FC target ports. You bind a port set to an igroup, to make the LUN available only on a subset of the storage system's target ports. Any host in the igroup can access the LUNs only by connecting to the target ports in the port set. If an igroup is not bound to a port set, the LUNs mapped to the igroup are available on all of the storage system’s FC target ports. The igroup controls which initiators LUNs are exported to. The port set limits the target ports on which those initiators have access. You use port sets for LUNs that are accessed by FC hosts only. You cannot use port sets for LUNs accessed by iSCSI hosts.

All ports on both systems in the HA pair are visible to the hosts. You use port sets to fine-tune which ports are available to specific hosts and to limit the number of paths to the LUNs to comply with the limitations of your multipathing software. When using port sets, make sure your port set definitions and igroup bindings align with the cabling and zoning requirements of your configuration.
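
For example, to restrict an igroup to one target port on each controller of an HA pair (the system, port, and igroup names come from the table that follows and are illustrative):

# Create a port set and add a port from each system in the HA pair
portset create -f portset1 SystemA:4b
portset add portset1 SystemB:4b

# Bind the port set to the igroup so its LUNs are only advertised on those ports
igroup bind linux-igroup1 portset1

# Verify
portset show portset1
igroup show linux-igroup1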

Port Sets

Display
  portset show
  portset show portset1
  igroup show linux-igroup1

Create
  portset create -f portset1 SystemA:4b

Destroy
  igroup unbind linux-igroup1 portset1
  portset destroy portset1

Add a port
  portset add portset1 SystemB:4b

Remove a port
  portset remove portset1 SystemB:4b

Binding
  igroup bind linux-igroup1 portset1
  igroup unbind linux-igroup1 portset1

FCP service

Display
  fcp show adapter -v

Daemon status
  fcp status

Start
  fcp start

Stop
  fcp stop

Stats
  fcp stats -i interval [-c count] [-a | adapter]
  fcp stats -i 1

Target expansion adapters (bring up/down)
  fcp config <adapter> [down|up]
  fcp config 4a down

Target adapter speed
  fcp config <adapter> speed [auto|1|2|4|8]
  fcp config 4a speed 8

Set WWPN #
  fcp portname set [-f] adapter wwpn
  fcp portname set -f 1b 50:0a:09:85:87:09:68:ad

Swap WWPN #
  fcp portname swap [-f] adapter1 adapter2
  fcp portname swap -f 1a 1b

Change WWNN
  # display the current nodename
  fcp nodename

  fcp nodename [-f] nodename
  fcp nodename 50:0a:09:80:82:02:8d:ff

Note: The WWNN of a storage system is generated from the serial number in its NVRAM, but it is stored on disk. If you ever replace a storage system chassis and reuse it in the same Fibre Channel SAN, it is possible, although extremely rare, that the WWNN of the replaced storage system is duplicated. In this unlikely event, you can change the WWNN of the storage system.

WWPN Aliases - display
  fcp wwpn-alias show
  fcp wwpn-alias show -a my_alias_1
  fcp wwpn-alias show -w 10:00:00:00:c9:30:80:2f

WWPN Aliases - create
  fcp wwpn-alias set [-f] alias wwpn
  fcp wwpn-alias set my_alias_1 10:00:00:00:c9:30:80:2f

WWPN Aliases - remove
  fcp wwpn-alias remove [-a alias ... | -w wwpn]
  fcp wwpn-alias remove -a my_alias_1
  fcp wwpn-alias remove -w 10:00:00:00:c9:30:80:2f

Snapshots and Cloning

Data ONTAP provides a variety of methods for protecting data in an iSCSI or Fibre Channel SAN. These methods are based on Snapshot technology in Data ONTAP, which enables you to maintain multiple read-only versions of LUNs online per volume. Snapshot copies are a standard feature of Data ONTAP. A Snapshot copy is a frozen, read-only image of the entire Data ONTAP file system, or WAFL (Write Anywhere File Layout) volume, that reflects the state of the LUN or the file system at the time the Snapshot copy is created. The other data protection methods listed in the table below rely on Snapshot copies or create, use, and destroy Snapshot copies, as required.

The following table describes the various methods for protecting your data with Data ONTAP

Snapshot copy
  • Make point-in-time copies of a volume.

SnapRestore
  • Restore a LUN or file system to an earlier preserved state in less than a minute without rebooting the storage system, regardless of the size of the LUN or volume being restored.
  • Recover from a corrupted database or a damaged application, a file system, a LUN, or a volume by using an existing Snapshot copy.

SnapMirror
  • Replicate data or asynchronously mirror data from one storage system to another over local or wide area networks (LANs or WANs).
  • Transfer Snapshot copies taken at specific points in time to other storage systems or near-line systems. These replication targets can be in the same data center through a LAN or distributed across the globe connected through metropolitan area networks (MANs) or WANs. Because SnapMirror operates at the changed block level instead of transferring entire files or file systems, it generally reduces bandwidth and transfer time requirements for replication.

SnapVault
  • Back up data by using Snapshot copies on the storage system and transferring them on a scheduled basis to a destination storage system.
  • Store these Snapshot copies on the destination storage system for weeks or months, allowing recovery operations to occur nearly instantaneously from the destination storage system to the original storage system.

SnapDrive for Windows or UNIX
  • Manage storage system Snapshot copies directly from a Windows or UNIX host.
  • Manage storage (LUNs) directly from a host.
  • Configure access to storage directly from a host.
  SnapDrive for Windows supports Windows 2000 Server and Windows Server 2003. SnapDrive for UNIX supports a number of UNIX environments.

Native tape backup and recovery
  • Store and retrieve data on tape.

NDMP (Network Data Management Protocol)
  • Control native backup and recovery facilities in storage systems and other file servers. Backup application vendors provide a common interface between backup applications and file servers.

A LUN clone is a point-in-time, writable copy of a LUN in a Snapshot copy. Changes made to the parent LUN after the clone is created are not reflected in the Snapshot copy. A LUN clone shares space with the LUN in the backing Snapshot copy. When you clone a LUN, and new data is written to the LUN, the LUN clone still depends on data in the backing Snapshot copy. The clone does not require additional disk space until changes are made to it. You cannot delete the backing Snapshot copy until you split the clone from it. When you split the clone from the backing Snapshot copy, the data is copied from the Snapshot copy to the clone, thereby removing any dependence on the Snapshot copy. After the splitting operation, both the backing Snapshot copy and the clone occupy their own space.

Use LUN clones to create multiple read/write copies of a LUN, for example to test an application against a copy of production data, or to give additional users a writable copy of the data without exposing the original LUN.

Display clones
  snap list

Create a clone
  # Create a LUN
  lun create -s 10g -t solaris /vol/tradvol1/lun1

  # Create a Snapshot copy of the volume containing the LUN to be cloned
  snap create tradvol1 tradvol1_snapshot_08122010

  # Create the LUN clone
  lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010

Destroy a clone
  # display the snapshot copies
  lun snap usage tradvol1 tradvol1_snapshot_08122010

  # Delete all the LUNs in the active file system that are displayed by the lun snap usage command
  lun destroy /vol/tradvol1/clone_lun1

  # Delete all the Snapshot copies that are displayed by the lun snap usage command, in the order they appear
  snap delete tradvol1 tradvol1_snapshot_08122010

Clone dependency
  vol options <vol_name> snapshot_clone_dependency on
  vol options <vol_name> snapshot_clone_dependency off

  Note: Prior to Data ONTAP 7.3, the system automatically locked all backing Snapshot copies when Snapshot copies of LUN clones were taken. Starting with Data ONTAP 7.3, you can enable the system to lock only the backing Snapshot copies for the active LUN clone. If you do this, when you delete the active LUN clone, you can delete the base Snapshot copy without having to first delete all of the more recent backing Snapshot copies.

  This behavior is not enabled by default; use the snapshot_clone_dependency volume option to enable it. If this option is set to off, you will still be required to delete all subsequent Snapshot copies before deleting the base Snapshot copy. If you enable this option, you are not required to rediscover the LUNs. If you perform a subsequent volume snap restore operation, the system restores whichever value was present at the time the Snapshot copy was taken.

Restore from a snapshot
  snap restore -s payroll_lun_backup.2 -t vol /vol/payroll_lun

Split the clone
  lun clone split start lun_path
  lun clone split status lun_path

Stop clone splitting
  lun clone split stop lun_path

Delete a snapshot copy
  snap delete vol-name snapshot-name
  snap delete -a -f <vol-name>

Disk space usage
  lun snap usage tradvol1 mysnap

Use volume copy to copy LUNs
  vol copy start -S source:source_volume dest:dest_volume
  vol copy start -S /vol/vol0 filerB:/vol/vol1

Disk Space Management

There are a number of commands that let you see how disk space is used and manage it.

Disk space usage for aggregates
  aggr show_space

Disk space usage for volumes or aggregates
  df

The estimated rate of change of data between Snapshot copies in a volume
  snap delta
  snap delta /vol/tradvol1 tradvol1_snapshot_08122010

The estimated amount of space freed if you delete the specified Snapshot copies
  snap reclaimable
  snap reclaimable /vol/tradvol1 tradvol1_snapshot_08122010