Thursday, July 17, 2014

Root Volume Dangerously Low on Space

So I wanted to walk through a customer's data migration plan in my home lab to brush up on some SnapMirror and FlexClone commands in clustered ONTAP. To do this I would need a functional cluster, and to my knowledge I had several ONTAP simulators running clustered ONTAP, including a couple of functional clusters. It had been a week or more since I last logged into anything in the home lab, and I had been neglecting the usual administrative duties that go along with managing storage and virtual machines. So needless to say, before I could do what I really wanted to do, I needed to clean up my mess, so to speak.

My single-node cluster was unavailable via System Manager, so I opened the node's console from vSphere and was greeted with the following:


Notice the console message: "The root volume (/mroot) on miamicl-01 is dangerously low on space; less than 10 MB remaining. To make space available, delete old snapshot copies, delete unneeded files, or expand the root volume capacity. After space is made available, reboot the controller. If needed contact technical support for assistance." There is also the message where the controller calls home: "Call home for ROOT VOLUME NOT WORKING PROPERLY: RECOVERY REQUIRED."

The first error message is pretty self-explanatory. The second message sounds scary, but in this case it's a quick fix: free up some space in the root volume on miamicl-01 and we will be back in business in the "Miami office," which is critical. Luckily I have run into this before, and I have even had people ask how to manage space in a node's root volume, since it is not accessible from System Manager. As far as I know, the only way to manage a node's root volume is via the nodeshell CLI.

So let's log in and free up some space, shall we? As soon as I log in, I get the same message as above about low space on the root volume. From the prompt (which doesn't look like the normal clustershell) I drop to the nodeshell with the following:

miamicl-01::> system node run local
Type 'exit' or 'Ctrl-D' to return to the CLI

Now I need to determine the name of the root volume, so I use "vol status". In the output, under the "Options" column, I find that vol0 has the root attribute.
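On my vsim that output looks more or less like this (trimmed down, and the status details are a sketch from memory, so yours will differ):

miamicl-01> vol status
         Volume State      Status            Options
           vol0 online     raid_dp, flex     root
                           64-bit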

I then use "snap list vol0" to identify all snapshots in volume vol0. And I use "snap delete vol0 "  to delete all snapshots, replacing "" with "nightly.x" or "hourly.x" as necessary until I have deleted all snapshots. Then I reboot the node, and it comes up clean, I login, and am returned to the normal and familiar clustershell.

Then I reboot the node; it comes up clean, I log in, and I am returned to the normal and familiar clustershell.

Great! My single-node cluster is back online and now I can begin preparations for creating SnapMirror destinations and the other tasks I need to familiarize myself with. But let's take it a step further so that next week, when I log in to test/verify some action plans or commands, we don't have to recover the root volume again. Let's configure auto-deletion of snapshots and a target free space to prevent this moving forward. This time I can log the output using PuTTY/SSH instead of the vSphere console. It looks like this:

login as: admin
Using keyboard-interactive authentication.
Password:
miamicl::>
miamicl::> node run local
Type 'exit' or 'Ctrl-D' to return to the CLI
miamicl-01> snap autodelete vol0 on
snap autodelete: snap autodelete enabled
miamicl-01> snap autodelete vol0 target_free_space 35
snap autodelete: snap autodelete configuration options set
miamicl-01> snap autodelete vol0
snapshot autodelete settings for vol0:
state                           : on
commitment                      : try
trigger                         : volume
target_free_space               : 35%
delete_order                    : oldest_first
defer_delete                    : user_created
prefix                          : (not specified)
destroy_list                    : none
miamicl-01>
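As a side note, you don't have to drop to the nodeshell every time just to keep an eye on vol0. From the clustershell you can pass a single 7-Mode command through to the node, something along these lines (I'm sketching from memory, so check the syntax on your version):

miamicl::> system node run -node miamicl-01 -command "df -h vol0"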

With the trigger set to "volume", once vol0 starts running out of room, ONTAP should automatically delete snapshots (oldest first) until the volume is back to 35% free space, which should keep the root volume out of trouble. Now I can create SnapMirror relationships from my 7-Mode vsim to my cluster. That should be fun, so grab a beer and stick around, though I'm running short on time and probably won't get around to documenting it until next week.

Wednesday, July 2, 2014

iSCSI SVM to ESXi Host - Clustered ONTAP

Eventually, I want to spin up a bunch of small VMs (I'll probably use Ubuntu as an example for now) on one volume and enable deduplication, or A-SIS (Advanced Single Instance Storage), to fit them into a very small amount of disk space. In my lab, I will provision an iSCSI LUN to my ESXi host, using the cluster of vsims running clustered ONTAP, to use as the datastore housing my VMs.

Later, I will dedupe that volume, allowing me to create twice as many VMs as one would think could fit in that amount of disk space.

First, I log in to the cluster via System Manager and create my SVM for iSCSI. In the left pane of System Manager, expand "Storage Virtual Machines", select the cluster, and click "Create" to open the SVM setup wizard. This should look very similar to previous posts. Enter a name for your SVM. In my lab I'm currently using FlexVol volumes. For data protocols, select "iSCSI". Leave the language as the default, select a security style, and select an aggregate to house the SVM's root volume. If you've previously entered DNS configuration information, those boxes will already be filled out. Mine looks like this:


Click "Submit and continue".
You can then configure the LIFs for iSCSI. In this lab I only made one LIF per node.
You can then enter SVM administrator details.
At this point your new iSCSI SVM is created. However, you still need to provision a volume and an iSCSI LUN, and map the LUN to an igroup containing your ESX host's IQN; a rough CLI equivalent of the whole flow is sketched below.
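If you'd rather do all of this from the clustershell instead of System Manager, it looks roughly like this. Every name, address, and IQN here is a made-up placeholder (iscsi_svm, aggr1, iscsi_vol, lun0, the "esx" igroup), and exact flags vary a bit between ONTAP versions, so adjust for your environment:

miamicl::> vserver create -vserver iscsi_svm -rootvolume iscsi_svm_root -aggregate aggr1 -rootvolume-security-style unix
miamicl::> vserver iscsi create -vserver iscsi_svm
miamicl::> network interface create -vserver iscsi_svm -lif iscsi_lif1 -role data -data-protocol iscsi -home-node miamicl-01 -home-port e0c -address 192.168.100.50 -netmask 255.255.255.0
miamicl::> volume create -vserver iscsi_svm -volume iscsi_vol -aggregate aggr1 -size 20g
miamicl::> lun create -vserver iscsi_svm -path /vol/iscsi_vol/lun0 -size 16g -ostype vmware
miamicl::> lun igroup create -vserver iscsi_svm -igroup esx -protocol iscsi -ostype vmware -initiator iqn.1998-01.com.vmware:myesxhost
miamicl::> lun map -vserver iscsi_svm -path /vol/iscsi_vol/lun0 -igroup esx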
Here's my igroup with my one ESX host's IQN:


Here's my LUNs view from System Manager showing my LUN mapped to the "esx" igroup:
You should be able to go into the vSphere Client and highlight your host in the left pane. Then click the "Configuration" tab and click "Storage". Click "Add Storage", select "Disk/LUN", and click "Next". Highlight the NetApp LUN we just presented and click "Next". You should now have a new iSCSI datastore mounted on your ESX host. (This assumes your host's software iSCSI adapter is already enabled, pointed at the SVM's iSCSI LIF as a target, and rescanned; if not, see the sketch below.)
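If the software iSCSI side isn't set up yet, the host end can be done from the ESXi shell too, something like this (vmhba33 and the LIF address are placeholders, and I'm going from memory on ESXi 5.x syntax):

esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.100.50:3260
esxcli storage core adapter rescan --all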

In my lab I created a 16 GB LUN and presented it to my ESX host. I've created one Ubuntu VM and I am already down to 10 GB of free space in this LUN. I will create another one or two of these Ubuntu VMs and then enable dedupe on that iSCSI volume to see if we can save some space.
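When I get to the dedupe step, the plan is roughly the following (assuming 8.2-style "volume efficiency" commands and the placeholder SVM/volume names from the sketch above):

miamicl::> volume efficiency on -vserver iscsi_svm -volume iscsi_vol
miamicl::> volume efficiency start -vserver iscsi_svm -volume iscsi_vol -scan-old-data true
miamicl::> volume efficiency show -vserver iscsi_svm -volume iscsi_vol

The -scan-old-data option tells ONTAP to dedupe the blocks already on disk, not just new writes, which is what we want since the Ubuntu VMs will exist before efficiency is turned on.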