FAST from EMC – Performance, meet the quickening!

For those of you who know me (and even those who don’t), the important thing to know is this: I love innovation. I especially love it when something is introduced which does the right thing while removing the need to think about things which frankly we DON’T need to be thinking about. Just as important, it doesn’t take away your ability to think and act on your own – automation without the nanny effect you often see in announcements which assume you can’t be trusted with your own investment!

Looking at the particular challenge storage brings us – it’s always been a delicate balance of “What kind of storage do I put my app on?”, “How do I meet SLAs for the peak load?” and of course “Whatever decision I make today is locked in stone for the next 3-5 years, so I’d better design appropriately.” If you disagree that these harp on the extremely delicate balance of app vs. infrastructure, please let me know your feelings :)

Now, while I absolutely love to have those design conversations – the time has finally come where we don’t need a doctorate in ‘application layout’ or to get religion around IOPS and latency calculations in order to accommodate a mixed application environment. That time has come through the creation of FAST by EMC. FAST, which is an acronym for “Fully Automated Storage Tiering,” actually does what it says on the tin!

Think about it for a moment. What if I simply laid my applications out on disk and let the workload dictate what kind of storage my app should live on – and, unless I have specific requirements, let my SLAs really run the show? That would take the complicated work of ‘figuring it out’ – frankly an arduous task – and leave it up to deep analytics. The end result: you have more time to work on other projects, and your storage starts to perform like never before.

But that is not to say this is infallible – storage is almost as bad as the database world, where people not only WANT control over what happens, when, and why, but DEMAND it! And FAST gives you that power. I somewhat relate FAST to DRS from VMware: let the system analyze what IS happening and, based upon past performance and utilization, predict what would be a good fit – and if you agree, you can APPROVE the change the system has put forth. Or, once you’ve reached the point of being comfortable that it’s acting in your best interests, allow it to move data automatically. People usually start off with DRS in a “manual” approval mode and then quickly roll into “automated,” because if the last 99 suggestions the system made were good, there’s a good chance the 100th will be a good fit as well.

But just like DRS for VMware, there are exceptions – and it is for these exceptions that you define a POLICY, to ensure that your will is enforced and the things you don’t want to happen – DON’T!

So let’s get down to basics! What does this mean for you and me?

  • For once in our sad lives, we’ll be able to implement both FLASH and SATA into a traditional FC system and have the right disks spinning for the right apps.
    • Imagine it! Predictable workloads are EASY to assign to the right tier (sort of), but imagine those unpredictable apps, or even month-end apps!
      • Whoa! Are you saying I can take my somewhat stable monthly app, which hits its peak at month-end, and move it around based upon the application’s performance requirements?! Just think about it – high IOPS, high throughput, fast response times – all the benefits of FLASH when it’s needed, but the cost of SATA when it isn’t.
      • Next thing you’re going to tell me, I could be a seasonal business like a retailer and shift my workload over to FLASH disk non-disruptively for the extreme peak workload, then shift it back off to SATA when it’s not being hit quite so hard. :)
      • Oh, and this means so much more, but it’s late and I want to publish this without overflowing you with information ;)

But this is far more than simply allowing you to manage your dynamic workloads and ensure that the right storage is being used at the right time. Across the stack this can be an enabler when it comes to legal discovery, long-term data retention and archival, and fast response in situations of disputes or otherwise.

Alright, but what does all of this mean, and why should I care? (Read: why are you so excited about it, Christopher? :))

[Heat maps: an active ESX cluster without FAST vs. the same cluster after adding Flash and applying a FAST policy]

Active ESX cluster without FAST:
  • 384 Fibre Channel disks
  • 100% FC disk
  • Disk resources are ~80% busy

Same cluster with Flash and a FAST policy:
  • 368 FC disks, 16 Flash disks
  • 96% FC, 4% Flash
  • 68% less disk I/O contention
  • 2.5% faster disk response time

The little chart above is a basic breakdown of what you can very easily realize. Those little images are called “heat maps” – per the little legend on the left, the more RED something is, the busier it is, which means your disks are getting hit pretty hard. (Notice how, without FAST, most of the disks are either HOT or very HOT!)

What does this mean for me from an operational perspective? I didn’t have to bring in loads of engineers and architects to sit around and ask, “How do you think we should lay out the data to be most efficient on these new 16 Flash drives we added?” No. The system analyzed the workload and, over a couple of days, came to a conclusion – “This LUN will move from FC to Flash” – and all of a sudden our performance started to shine, without taking any outage or downtime. Hell, we didn’t even need to figure out what we should do – we could let it collect data and then advise us (since its algorithms know things about the operation of the system we can only guess about!)

What would have been even sweeter is if this example had SATA in the mix as well – because then we’d have the question of what should get shifted from where to where! Take a look at this pretty straightforward workload chart showing which LUNs are more active than others:

Is this chart a guarantee that all environments look like this? Absolutely not. I know of one specific heavy SAP environment where the majority of its disks look like good Flash targets and none of them look like a good fit for SATA. However, a majority of environments DO have some things which likely aren’t on the most ideal storage – and when you consider consolidation, that story only gets more compelling.

So, if you have a dedicated frame which is maxed out for a single app, you definitely want to consider FAST in the equation, because it can help determine your best fit for FLASH – and if SATA is a player at all in v1 of FAST, then excellent.

v2 of FAST will change all the rules

Though what I’m sure you’ll like just as much as I do is a real live example – so check out this video, which was delivered at VMworld 2009!

And here we are, in a new era – a new level of sophistication the likes of which has never been seen before. (Oh, there have been ‘attempts’ at producing solutions which are effectively ‘features,’ but for the full picture and depth of what today brings about – there is not a candle in the industry which can hold to this maelstrom!)

Also, for reference – here is the official press announcement from today!

(One more Video!!!)

ReadyBoost does know boundaries!

No ReadyBoost when you have SSD!

So, I was talking with someone recently and noticed he had an SD card sitting in the SD slot of his Lenovo laptop. Noticing that, I asked him about it, to which he replied that he leveraged it to improve performance by utilizing ReadyBoost! I thought to myself, “Wow, that’s a great idea, since it’s just a slot taking up space and often not being used! Why not do this myself?” So I started offloading my data from the device, and while waiting for it to finish I impatiently went to check the details for kicking off ReadyBoost – and lo and behold, I get this image! For those of you who know me, you know that I run my Lenovo T61p with Win7 and 2008 R2 Enterprise from Boot-from-VHD images which reside on an SSD.

Apparently, my SSD is so fast (even though it’s running from a VHD) that I cannot gain value from ReadyBoost!


Frankly, that’s pretty damn cool from where I’m standing! :)

Introducing RichCopy – your Robocopy replacement!

Alright, for those of you in the know, RichCopy isn’t anything new. In fact, it’s been used internally at Microsoft for the past 10 years – however, after a near-decade of “Seriously?! We haven’t released this to the public?!”, it is indeed now available! There’s a whole slew of details at other blogs: RichCopy Build 4.0.216 has been posted to the Microsoft Download Center, the TechNet Magazine article Utility Spotlight RichCopy, and especially, be sure to visit the blog of Ken Tamaru, the creator of this amazing tool!

So, what is this amazing tool?! It’s free, first of all! And it allows you even more granular control over your copy processes, including multi-threaded copy operations!

As you can see here in this image, I’m copying a number of various-sized files in different locations simultaneously! Though one of the major perks is not in the initial copy, but in the situation of changed-file copies (our common “what’s changed” incremental model :)). Here are some notes about that, adapted from Ken’s blog:


There must be many users who use RichCopy to copy only updated files. Most users assign only 1 thread for directory search; however, you can dramatically accelerate the performance of source and destination comparison by assigning multiple threads. This works especially well when files are distributed across multiple directories, as RichCopy assigns 1 thread to each directory search, not to the whole tree.

Here is an example. (local to local, but different storage)

(1 million files in source and destination)
Threads for directory search    Comparison time
1                               about 10 minutes
2                               about 6 minutes
4                               about 2 minutes
8                               about 1 minute

RichCopy Options

Hands down, one of the coolest things is the level of options you can set.

The specific options I’d like to highlight are the number of threads you can assign to specific operations. That way you can increase not only the number of directories you traverse looking for files (or changed files), but also the number of files you copy simultaneously! This is a lifesaver when you’re copying many small files, which, when done sequentially, tends to take a lifetime!
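For comparison: the Robocopy that ships with Windows 7 and Server 2008 R2 does gain a multi-threaded copy switch of its own, /MT, though it’s one knob rather than RichCopy’s separate per-operation thread counts. A quick sketch, with made-up example paths:

robocopy C:\Data \\server\share\Data /MIR /MT:8

That mirrors the tree with 8 copy threads – handy, but with no independent control over directory-search threads the way RichCopy gives you.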


I did run a number of speed tests on my machine; however, speeds when run from my SSD tend to suffer compared to, say, copying from one SAN to another SAN (something more realistic, especially in a migration scenario).


In any sense, it’s a great tool which operates via CLI or GUI, and I’ll be sure to use or introduce it in my future migration opportunities as applicable :) Oh, and be sure to click on any instance of “RichCopy” in this post to get a direct link to its download! :)

RTM-Weekend! Win7, 2008 R2, Boot from VHD and more!

Yay! It’s RTM weekend! Alright, not for everyone, because we’re all patiently waiting for August 6th when RTM hits TechNet and MSDN, but I needed to get the jump on things because I think I’m busy next weekend!

So, what does RTM weekend entail for me? Testing was the first order of business: testing installations on my hardware and getting a feel for how I’ll architect my deployment model for Win7 and 2008 R2!

First things first – create bootable VHD images to run my OS out of. Yes, I planned to deploy my systems via Boot from VHD, so I needed to create bootable images! And for this little decision, I opted to take advantage of WIM2VHD! So, what exactly is WIM2VHD? Well, that’s pretty simple to explain:

The Windows(R) Image to Virtual Hard Disk (WIM2VHD) command-line tool allows you to create sysprepped VHD images from any Windows 7 installation source. VHDs created by WIM2VHD will boot directly to the Out Of Box Experience, ready for your first-use customizations. You can also automate the OOBE by supplying your own unattend.xml file, making the possibilities limitless.
Fresh squeezed, organically grown, free-range VHDs – just like Mom used to make – that work with Virtual PC, Virtual Server, Microsoft Hyper-V, and Windows 7’s new Native VHD-Boot functionality!

All you need in order to be successful with WIM2VHD is:

  • A computer running one of the following Windows operating systems:
    • Windows 7 Beta or RC (or RTM)
    • Windows Server 2008 R2 Beta or RC (or RTM)
    • Windows Server 2008 with Hyper-V RTM enabled (x64 only)
  • The Windows 7 RC Automated Installation Kit (AIK) or Windows OEM Pre-Installation Kit (OPK) installed.
  • A Windows 7 or Windows Server 2008 R2 installation source, or another Windows image captured to a .WIM file.

Then, simply execute a command like I did below and you’re moving along!

Create a bootable VHD of Windows 7 Ultimate
cscript WIM2VHD.WSF /wim:D:\sources\install.wim /sku:ultimate /VHD:C:\vhd\win7ult.vhd

Create a bootable VHD of Windows Server 2008 R2 Enterprise
cscript WIM2VHD.WSF /wim:D:\sources\install.wim /sku:serverenterprise /VHD:C:\vhd\R2Ent.vhd

This frankly takes care of most of the work on your behalf! (It sure did for me!)

FYI: the image defaults to 40GB, so if you want to change that, use the /size:<vhdSizeInMb> flag.
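For instance, if you wanted a 25GB image instead (the value is in megabytes – 25600 here is just an illustration), you’d tack the flag onto the same command:

cscript WIM2VHD.WSF /wim:D:\sources\install.wim /sku:ultimate /VHD:C:\vhd\win7ult.vhd /size:25600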

After this point, all you need to do is run bcdedit to make the system bootable and you’re set! Each template line below is followed by the exact command I ran; note that the /copy command prints the {GUID} of the new boot entry, and that’s the value you plug into the subsequent /set commands.

bcdedit /copy {current} /d "New VHD Description"
    bcdedit /copy {current} /d "Windows 7 Ultimate"
bcdedit /set <guid> device vhd=[driveletter:]\<directory>\<vhd filename>
    bcdedit /set {GUID} device vhd=[c:]\vhd\win7ult.vhd
bcdedit /set <guid> osdevice vhd=[driveletter:]\<directory>\<vhd filename>
    bcdedit /set {GUID} osdevice vhd=[c:]\vhd\win7ult.vhd
bcdedit /set <guid> detecthal on
    bcdedit /set {GUID} detecthal on

And you can perform those same exact steps again for your 2008 R2 VHD as well. It’s not only pretty straightforward, it’s so simple anyone can do it! After performing those steps I was up and running on a system which had no data, nothing, nada!
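If you want to sanity-check your handiwork before rebooting, you can list the entries in the boot store and confirm the device and osdevice values point at your VHD:

bcdedit /enum

And if you fat-finger an entry, bcdedit /delete {GUID} will remove it so you can start over.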

Now, to apply some context and depth to how I chose to use my deployment model. I’m running on my personal Lenovo T61p, which has a Kingston 128GB SSD inside it. Because I wanted to have ‘some’ kind of native OS to help work on anything should something go wrong, I opted for a 2008 R2 Enterprise (Core) installation. That gives me a minimal footprint, yet an OS I can feel comfortable and confident working on and with!

What this enables: my native OS runs on the “C:” drive and has a VHD directory where my images live. However, when I’m booted into either of my Boot-from-VHD OSes, the native SSD becomes the “D:” drive, so I can share files between the systems! And if you forget to copy something to the shared volume and need to access it, feel free to use the mount-VHD feature in the Disk Management tool (or Storage in 2008).


I personally prefer to mount it read-only because… I don’t want to take any risks, especially when it comes to anti-virus or other things (unless that is my specific intention).
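If you’d rather script the mount than click through Disk Management, diskpart can do the same thing – a minimal sketch, reusing the VHD path from my earlier example:

diskpart
select vdisk file=C:\vhd\win7ult.vhd
attach vdisk readonly
rem ...browse the mounted volume, then clean up:
detach vdisk

The readonly flag on attach vdisk gives you exactly the no-risk mount I mentioned above.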

Now that you have a working and operational system, you’re good to go! And if you stick with a native OS for maintenance reasons, you can use it to take hard backups of your VHDs for migration to other hardware or recovery to other points in time! (Note: you can back up the unused OS from your active OS if you’d like as well :))

So, I hope you have a good RTM weekend coming up – I look forward to being able to generate and use my license keys come August 6th!