
FAST from EMC – Performance, meet the quickening!

December 8th, 2009

For those of you who know me (and even those who don’t), what is important to know is this: I love innovation.  I especially love it when something is introduced that does the right thing while removing the need to think about things we frankly DON’T need to be thinking about.  (That said, it shouldn’t stop us from thinking or acting on our own; action without the nanny effect, which you often see in announcements that assume you can’t be trusted with your own investment!)

Looking at the particular challenge storage brings us: it has always been a delicate balance of “What kind of storage do I put my app on?”, “How do I meet SLAs for the peak load?”, and of course “Whatever decision I make today is locked in stone for the next 3-5 years, so I’d better design appropriately.”  If you disagree that these harp on the extremely delicate balance of app vs. infrastructure, please let me know your feelings :)

Now, while I absolutely love to have those design conversations, the time has finally come where we don’t need a doctorate in ‘application layout’ or to get religion around IOPS and latency calculations in order to accommodate a mixed application environment.  That has come through the creation of FAST by EMC.  FAST, an acronym for “Fully Automated Storage Tiering”, actually does what it says on the tin!

Think about it for a moment.  What if I simply laid my applications out on disk and let the workload dictate what kind of storage my app should live on, and, unless I have specific requirements, let my SLAs really run the show?  That would take the complicated and frankly arduous work of ‘figuring it out’ and leave it up to deep analytics.  The end result: you have more time to work on other projects, and your storage starts to give back and perform like never before.

But that is not to say this is infallible.  Storage is almost as bad as the database world, where people not only WANT control over what happens, when, and why, but DEMAND it!  And this gives you that power.  I somewhat relate FAST to DRS from VMware: let the system analyze what IS happening and, based upon past performance and utilization, predict what would be a good fit.  If you agree, you can APPROVE the change the system has put forth.  Or, once you’ve reached the point of being comfortable that it’s acting in your best interests, allow it to move data automatically.  People usually start off with DRS in a “Manual” approval mode and then quickly roll into “Automated”, because if 99 suggestions the system made were good, there’s a good chance the 100th suggestion will be a good fit as well.

But just like DRS for VMware, there are exceptions.  And it is in these exceptions that you have a POLICY defined to ensure that your will is enforced and things you don’t want to happen, DON’T!
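To make the manual-vs-automated approval idea concrete, here is a minimal sketch of that flow in Python. Everything in it is hypothetical: the function names, the IOPS thresholds, and the “pinned” LUN policy are my illustration of the concept, not EMC’s actual FAST API or algorithm.

```python
# Hypothetical sketch of FAST-style tiering recommendations with an
# approval mode ("manual" vs. "automated") and a policy exception list
# of pinned LUNs that must never be relocated. All names and
# thresholds are illustrative, not EMC's real implementation.

FLASH_IOPS_THRESHOLD = 2000   # promote very busy LUNs to Flash
SATA_IOPS_THRESHOLD = 100     # demote idle LUNs to SATA

def recommend_tier(avg_iops):
    """Pick a target tier from observed average IOPS."""
    if avg_iops >= FLASH_IOPS_THRESHOLD:
        return "FLASH"
    if avg_iops <= SATA_IOPS_THRESHOLD:
        return "SATA"
    return "FC"

def plan_moves(luns, pinned, mode="manual"):
    """Return (auto_moves, suggestions), honoring the pinned-LUN policy.

    In "manual" mode every move is only a suggestion awaiting approval;
    in "automated" mode the system would carry the moves out itself.
    """
    auto, suggest = [], []
    for name, (tier, avg_iops) in luns.items():
        if name in pinned:            # policy exception: never relocate
            continue
        target = recommend_tier(avg_iops)
        if target != tier:
            move = (name, tier, target)
            (auto if mode == "automated" else suggest).append(move)
    return auto, suggest

luns = {
    "erp_db":   ("FC", 3500),   # hot: candidate for Flash
    "archive":  ("FC", 40),     # cold: candidate for SATA
    "web_logs": ("FC", 800),    # fine where it is
    "payroll":  ("FC", 5000),   # hot, but pinned by policy
}
auto, suggestions = plan_moves(luns, pinned={"payroll"}, mode="manual")
print(suggestions)  # [('erp_db', 'FC', 'FLASH'), ('archive', 'FC', 'SATA')]
```

Note how the pinned `payroll` LUN never shows up in either list, and in manual mode nothing moves until you approve it — exactly the DRS-style comfort curve described above.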

So let’s get down to basics!  What does this mean for you and me?

  • For once in our sad lives, we’ll be able to implement both FLASH and SATA into a traditional FC system and have the right disks spinning for the right apps.
    • Imagine it! Predictable workloads are EASY to assign to the right tier (sort of) but imagine those unpredictable apps, or even Month-end Apps!
      • Whoa! Are you saying I can take my somewhat stable monthly app, which hits its peak for month-end, and move it around based upon the application’s performance requirements?!   Just think about it: high IOPS, high throughput, fast latency response times – all the benefits of FLASH when it’s needed, but the cost of SATA when it isn’t.
      • Next thing you’re going to tell me, I could be a seasonal business like a retailer or similar and shift my workload over to FLASH disk non-disruptively for the extreme peak workload, and then shift it back off to SATA when it’s not being hit quite so hard. :)
      • Oh and this means so much more, but it’s late and I want to publish this without overflowing you with information ;)
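The month-end/seasonal idea above can be sketched as a tiny policy window. This is purely illustrative: the function, the day range, and the tier names are my assumptions, not a real FAST policy definition.

```python
# Hedged sketch of a month-end policy window: the workload runs on
# Flash only during its peak days and falls back to SATA otherwise.
# The day range and tier names are made up for illustration.
from datetime import date

def tier_for(day, peak_days=range(28, 32)):
    """Month-end window: days 28-31 run on FLASH, the rest on SATA."""
    return "FLASH" if day.day in peak_days else "SATA"

print(tier_for(date(2009, 12, 30)))  # FLASH
print(tier_for(date(2009, 12, 10)))  # SATA
```

A retailer’s seasonal peak would be the same idea with a wider window (say, the holiday weeks) instead of the last few days of each month.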

But this is far more than just simply allowing you to manage your dynamic workloads and ensure that the right storage is being used at the right time.  Across the stack this can be an enabler when it comes to times of legal discovery, long term data retention and archival, and fast response in situations of disputes or otherwise.  

Alright, but what does all of this mean, and why should I care? (read: Why are you so excited about it Christopher? :))

Active ESX Cluster without FAST:
  • 384 Fibre Channel disks
  • 100% FC disk
  • Disk resources are ~80% busy

Same cluster adding FLASH and applying a FAST Policy:
  • 368 FC disks, 16 Flash disks
  • 96% FC, 4% Flash
  • 68% less disk I/O contention
  • 2.5% faster disk response time

The little chart above is a basic breakdown of what you can very easily realize.  Those little images are called “HEAT maps”: in the legend on the left, the more RED something is, the busier it is, which means your disks are getting hit pretty hard.  (Notice how, for the most part, all of the disks are either HOT or very HOT.)

What does this mean for me from an operational perspective? I didn’t have to bring in loads of engineers and architects to sit around and ask, “How do you think we should lay out the data to be most efficient on these new 16 Flash drives we added?”  No.  The system analyzed the workload and, over a couple of days, came to a conclusion: “This LUN will move from FC to Flash.”  All of a sudden our performance started to shine, without any outage or downtime.  Hell, we didn’t even need to figure out what we should do: we could let it collect data and then advise us (since its algorithms know things about the operation of the system we can only guess about!)
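That “watch for a few days first, then advise” flow can be sketched in a few lines. Again, this is an assumption-laden toy: the sample data, the averaging, and the `flash_floor` threshold stand in for FAST’s much richer internal telemetry and analytics.

```python
# Toy version of "collect data, then advise": average a few days of
# per-LUN IOPS samples before concluding a LUN should move. The LUN
# names, samples, and threshold are hypothetical.
from statistics import mean

samples = {  # daily average IOPS over three observed days
    "lun_042": [2800, 3100, 3400],
    "lun_077": [120, 90, 60],
}

def advise(samples, flash_floor=2000):
    """Recommend a move only when the multi-day average clears the bar."""
    advice = {}
    for lun, days in samples.items():
        avg = mean(days)
        advice[lun] = "move to Flash" if avg >= flash_floor else "stay on FC"
    return advice

print(advise(samples))
# {'lun_042': 'move to Flash', 'lun_077': 'stay on FC'}
```

Averaging over days rather than reacting to one busy hour is the point: it keeps a momentary spike from triggering a relocation you’d regret.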

What would have been even sweeter is if this example had SATA in the mix as well, because then we’d have the question of what should get shifted from where to where! Take a look at this pretty straightforward workload chart showing which LUNs are more active than others.

Is this chart a guarantee that all environments look like this? Absolutely not.   I know of one specific heavy SAP environment where the majority of the disks look like good Flash targets and none of them look like a good fit for SATA.  However, a majority of environments DO have some things which likely aren’t on the most ideal storage, and when you consider consolidation, that story only gets more compelling.

So, if you have a dedicated frame which is maxed out for a single app, you definitely want to consider FAST in the equation, because it can help determine your best fit for FLASH, and whether SATA is a player at all (in v1 of FAST).

v2 of FAST will change all the rules.

Though what I’m sure you like just as much as I do is a real live example, so check out this video, which was delivered at VMworld 2009!

And here we are, in a new era, a new level of sophistication the likes of which has never been seen before.  (Oh, there have been ‘attempts’ at producing solutions which are effectively ‘features’, but for the full picture and depth of what today brings about, nothing in the industry can hold a candle to this maelstrom!)

Also, for reference – Here is the official Press Announcement from Today!

(One more Video!!!)

Posted in Cloud, emc, SSD, Storage | Comments (8)
