Hi! You might remember me from such blog posts as: EMC 20% Unified Storage Guarantee !EXPOSED! and the informational EMC Unified Storage Capacity Calculator – The Tutorial! – Well, here I’d like to bring to you the final word on this matter! (Well, my final word.. I’m sure well after I’m no longer discussing this… You will be, which is cool, I love you guys and your collaboration!)
Disclaimer: I am in no way saying I am the voice of EMC, nor am I assuming that Mike Richardson is in fact the voice of NetApp, but I know we're both loud, so our voices are heard regardless :)
So on to the meat of the ‘argument’ so to speak (That’d be some kind of vegan meat substitute being that I’m vegan!)
EMC Unified Storage Guarantee
I find it useful to quote the text of the EMC Guarantee, and then drill down into each selected section, as appropriate, in our comparable review of this subject.
It’s easy to be efficient with EMC.
EMC® unified storage brings efficiency to a whole new level. We’ve even created a capacity calculator so you can configure efficiency results for yourself. You’ll discover that EMC requires 20% less raw capacity to achieve your unified storage needs. This translates to superior storage efficiency when compared to other unified storage arrays—even those utilizing their own documented best practices.
If we’re not more efficient, we’ll match the shortfall
If for some unlikely reason the capacity calculator does not demonstrate that EMC is 20% more efficient, we’ll match the shortfall with additional storage. That’s how confident we are.
The guarantee to end all guarantees
Storage efficiency is one of EMC’s fundamental strengths. Even though our competitors try to match it by altering their systems, turning off options, changing defaults or tweaking configurations—no amount of adjustments can counter the EMC unified storage advantage.
Here’s the nitty-gritty, for you nitty-gritty types
- The 20% guarantee is for EMC unified storage (file and block—at least 20% of each)
- It’s based on out-of-the-box best practices
- There’s no need to compromise availability to achieve efficiency
- There are no caveats on types of data you must use
- There’s no need to auto-delete snapshots to get results
This guarantee is based on standard out-of-the-box configurations. Let us show you how to configure your unified storage to get even more efficiency. Try our capacity calculator today.
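Before moving on, the shortfall clause is easy to sanity-check with a little arithmetic. Here's a minimal sketch of my reading of it (the function and the 0.8 factor are my own framing of the claim, not EMC's official math):

```python
# Sketch of the guarantee arithmetic: EMC claims to need 20% less raw
# capacity than the competitor for the same usable result; if not, the
# shortfall is matched with additional storage. My reading, not legal text.

def shortfall_tb(competitor_raw_tb: float, emc_raw_tb: float) -> float:
    """Raw TB EMC would owe to honor the 20% claim (0 if none)."""
    target = competitor_raw_tb * 0.8   # "20% less raw capacity"
    return max(0.0, emc_raw_tb - target)

print(shortfall_tb(100, 85))  # 85 - 80 -> owes 5.0 TB
print(shortfall_tb(100, 78))  # already under the target -> 0.0
```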
Okay, now that we have THAT part out of the way… What does this mean? Why am I stating the obvious (so to speak)? Let's drill down into the discussions at hand.
The 20% guarantee is for EMC unified storage (file and block—at least 20% of each)
This is relatively straightforward. It simply says "Build a Unified Configuration – which is Unified." SAN is SAN, NAS is NAS, but when you combine them together you get a Unified Configuration! – Not much to read into that. Just that you're more likely to see the benefit of 20% or greater in a Unified scenario than in a comparable SAN-only or NAS-only scenario.
It’s based on out-of-the-box best practices
I cannot stress this enough: out-of-box best practices. What does that mean? Sure, anyone can build a configuration which responds to this "20% efficiency guarantee" with "Muhahaha! Look what I did! I made this configuration which CLEARLY is less than 20%! Even going into the negative percentile! I AM CHAMPION, GIVE ME DISK NOW!" Absolutely. I've seen it, and heard it touted (hey, even humor me as I discuss a specific use-case which Mike Richardson and I have recently discussed). But building a one-off configuration which makes your numbers appear 'more right' versus using your company's subscribed best practices (and out-of-box configurations) is what is being proposed here. If it weren't for best practices we'd have R0 configurations spread across every workload, with every feature and function under the sun disabled, just to say "look what I can do!"
So, I feel it is important to put this matter to bed (because so many people have been losing time and sleep over this debate and consideration). I will take the liberty of quoting from a recent blog post by Mike Richardson – Playing to Lose, Hoping to Win: EMC's Latest Guarantee (Part 2). In this article Mike did some –great– analysis. We're talking champion. He went through and used the calculator, built out use-cases and raid groups, and really gave it a good, solid run-through (which I appreciate!). He was extremely honest, forthright, open and communicative about his experience, his configuration, and building this out with the customer in mind. To tell you the truth, Mike truly inspired me to follow up with this final reprise.
Reading through Mike’s article I would like to quote (in context) the following from it:
The configuration I recommend is to the left. With 450GB FC drives, the maximum drive count you can have in a 32bit aggr is 44. This divides evenly into 2 raidgroups of 20+2. I am usually comfortable recommending between 16 and 22 RG size, although NetApp supports FC raidgroup sizes up to 28 disks. Starting with the same amount of total disks (168 – 3 un-needed spares), the remaining disks are split into 8 RAID DP raidgroups. After subtracting an additional 138GB for the root volumes, the total usable capacity for either NAS or SAN is just under 52TB.
I love that Mike was able to share this image from the Internal NetApp calculator tool (It’s really useful to build out RG configurations) and it gives a great breakdown of disk usage.
For the sake of argument, for those who cannot make it out from the picture: what Mike has presented here is a 22-disk RAID-DP RG (20+2 disks, made up of 168 FC450 disks with 7 spares). I'd also like to note that the snapshot reserve has been changed from the default of 20% to 0% in this example.
Since I do not have access to the calculator tool which Mike used, I ran my own spreadsheet calculator, which more or less confirms what Mike's tool is saying. But this got me thinking! (Oh no! Don't start thinking on me now!) And I was curious. Sure, this deviates from best practices a bit, right? But BPs change at times, right?
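For those following along without access to either tool, here's roughly what my spreadsheet does, condensed into a few lines of Python. The ~418GiB right-sized figure for a 450GB FC drive and the ~10% overhead are assumptions on my part (the exact values depend on the tool and ONTAP version), so treat the output as ballpark, not gospel:

```python
# Ballpark usable-capacity calculator for a NetApp RAID-DP layout.
# ASSUMPTIONS: 450GB FC drives right-size to ~418 GiB, and WAFL/system
# overhead eats ~10% -- both figures vary by tool and ONTAP version.

RIGHT_SIZED_GIB = 418
WAFL_OVERHEAD = 0.10

def raid_dp_usable_gib(total_disks, spares, rg_size, aggr_snap_reserve):
    """Usable GiB after spares, dual parity, overhead, and aggr reserve."""
    data_plus_parity = total_disks - spares
    raid_groups = data_plus_parity // rg_size
    data_disks = raid_groups * (rg_size - 2)   # RAID-DP: 2 parity per RG
    raw = data_disks * RIGHT_SIZED_GIB
    return raw * (1 - WAFL_OVERHEAD) * (1 - aggr_snap_reserve)

# Mike's layout: 168 disks, 7 spares, 20+2 RGs, snap reserve zeroed out
print(raid_dp_usable_gib(168, 7, 22, 0.00))  # ~52,668 GiB, "just under 52TB"
# Same disks at the 14+2 best-practice RG size, default 5% aggr reserve
print(raid_dp_usable_gib(168, 7, 16, 0.05))
```

Interestingly, under these assumptions both layouts land on the same 140 data disks; the difference comes down to the reserves you keep and the rebuild exposure you accept.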
So, being that I rarely like to have opinions of my own, and instead like to base them on historical evidence, founded factually and referenced in others… I sent the following text message to various people I know (some former NetAppians, some close friends who manage large-scale enterprise NetApp accounts, etc. – the "etc." is for the protection of those I asked ;))
The TXT Message was: “Would you ever create a 20+2 FC RG with netapp?”
That seems pretty straightforward, right? Here is a verbatim summation of the responses I received.
- Sorry, I forgot about this email. To be brief, NO.
- “It depends, I know (customer removed) did 28, 16 is the biggest I would do”
- I would never think to do that… unless it came as a suggestion from NetApp for some perfemance reasons… (I blame txting for typo’s ;))
- Nope we never use more then 16
- Well rebuild times would be huge.
So, sure, this is a small sampling (of the responses I received), but I notice a resonating pattern there. The resounding response is NO. But wait, what does that have to do with a hole in the wall? Like Mike said, NetApp can do RG sizes of up to 28 disks. That is also absolutely 100% accurate, and in a small number of use-cases I have found situations in which people have exceeded 16-disk RGs. So, I decided to do a little research and see what the community has said on this matter of RG sizes. (This happened while I was trying to find a RAID6 RG rebuild guide – I failed.)
I found a few articles I’d like to reference here:
- Okay, maybe not the best reference, considering I was fairly active in the responses on the subject in July and August of 2008 in this particular thread. Read through it if you like; I guess the best takeaway I can get from it is what I happened to have said myself:
- I was looking at this from two aspects: Performance, and long-term capacity.
- My sources for this were a calculator and capacity documents.
- Hopefully this helped bring some insight into the operation and my decisions around it.
- (Just goes to show… I don't have opinions… only citeable evidence. Well, that and real-world customer experiences as well ;))
- Raid group size with FAS3140 and DS4243
- I found this in the DS4243 Disk Shelf Technical FAQ document
- WHAT ARE THE BEST PRACTICES FOR CONFIGURING RAID GROUPS IN FULLY LOADED CONFIGURATIONS?
- For one shelf: two RAID groups with a maximum size of 12. (It is possible in this case that customers will configure one big RAID group of size 23 – 21 data and 2 parity – however, NetApp recommends two RAID groups.)
- Managing performance degradation over time
- Aggregate size and "overhead" and % free rules of thumb.
- Why should we not reserve Snap space for SAN volumes?
- All around good information, conversation and discussion around filling up Aggr’s – No need to drill down to a specific point.
So, what does all of this mean other than the fact that I appear to have too much time on my hands? :)
Well, to sum up what I'm seeing, and considering we are in the section titled 'out-of-box best practices':
- Best Practices and recommendations (as well as expert guidance and general use) seem to dictate a 14+2, 16 disk RG
- Can that number be higher? Yes, but that would run counter to out-of-box best practices, not to mention that your performance will not benefit, as seen in the comments mentioned above (and the fact that spec.org tests are run in that model)
- By default the system will have a reserve, not set to 0% – so if I were to strip out all of the reserve (which is there for a reason), my usable capacity would go up in spades. But I'm not discussing a modified configuration; I'm comparing against a default, out-of-box best-practices configuration, which by default calls for a 5% aggr snap reserve, a 20% vol snap reserve for NAS, and a SAN Fractional Reserve of 100%
- Default Snapshot reserve, and TR-3483 helps provide backing information and discussion around this subject. (Friendly modifications from Aaron Delp’s NetApp Setup Cheat Sheet)
- In order to maintain these 'out-of-box best practices' and enable a true model of thin provisioning (albeit not what I am challenging here, especially since Mike completely whacked the reserve space for snapshots), our guarantee side of the house carries the 'caveat' of "There's no need to auto-delete snapshots to get results." Which is simply saying: even with your system at its out-of-box defaults, in order to take things to the next level you would need to enable "Volume Auto-Grow" on NetApp, or its sister function "Snap Auto Delete". The first is nice as it's not disruptive to your backups, but you can't grow once you've hit your peak! At that point your snapshots would be at risk. Don't put your snapshots at risk!
- Blog posts are not evidence for updating Best Practices, nor do they change your out-of-box defaults. What am I talking about here? (Hi Dimitris!) Dimitris wrote this –great– blog post, NetApp usable space – beyond the FUD, whereby he goes into depth and discussion of what we've been talking about these past weeks. He makes a lot of good points, and even goes so far as to validate a lot of what I've said, which I greatly appreciate. But he takes things a little too far when he 'recommends' snap reserve 0, fractional reserve 0, snap autodelete on, etc. As a former NetApp engineer I would often recommend 'changes' to the defaults and the best practices as the use-case fit; however, I never took the holistic position of "let's win this capacity battle at the cost of compromising my customer's data." And by blindly doing exactly what he suggests here, you are indeed putting your data integrity and recovery at risk.
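To make the reserve discussion concrete, here's a small sketch of how the default reserves shape presented capacity versus stripping them to zero. The percentages are the commonly documented defaults (5% aggregate snap reserve, 20% volume snap reserve for NAS, 100% fractional reserve for SAN); exact semantics vary by ONTAP version, so this is illustrative only:

```python
# How NetApp's default reserves shape presented capacity (illustrative).
# Defaults assumed: 5% aggr snap reserve, 20% vol snap reserve (NAS),
# 100% fractional reserve (SAN). Exact behavior varies by ONTAP version.

def nas_usable(aggr_gib, aggr_snap=0.05, vol_snap=0.20):
    """NAS space left after aggregate and volume snapshot reserves."""
    return aggr_gib * (1 - aggr_snap) * (1 - vol_snap)

def san_usable(aggr_gib, aggr_snap=0.05, fractional_reserve=1.0):
    """SAN LUN space: 100% fractional reserve holds back a matching
    GiB for every GiB of LUN to guarantee snapshot overwrites."""
    after_aggr = aggr_gib * (1 - aggr_snap)
    return after_aggr / (1 + fractional_reserve)

print(nas_usable(1000))        # defaults: 80% of 95% of 1000, ~760
print(nas_usable(1000, 0, 0))  # reserves stripped: the full 1000
print(san_usable(1000))        # defaults: ~950 / 2, ~475
```

Zeroing the reserves roughly doubles the presentable SAN capacity in this sketch, which is exactly why I keep stressing that such a configuration is no longer "out-of-box" and trades away snapshot protection to get there.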
I've noticed that I actually covered all of the other bullet points in this article without needing to drill into them separately. :) So, allow me to do some summing up on this coverage.
If we compare an EMC RAID6 configuration to a NetApp RAID-DP configuration, with file and block (at least 20% of each), using out-of-box default best practices, you will be able to achieve no-compromise availability and no-compromise efficiency regardless of data type, with no need to auto-delete your snapshots to gain results. So that's a guarantee you can write home about: 20% guaranteed, with 'caveats' you can fit into a single paragraph (and not a 96-page document ;))
Now, I'm sure… no, let me give a 100% guarantee… that someone is going to call 'foul' on this whole thing, and that this will be the hot-bed post of the week. I completely get it. But what you, the reader, are really wondering is: "Yeah, 20% guarantee… a guarantee of what? How am I supposed to learn about Unified?"
Welcome to the EMC Unified Storage – Next Generation Efficiency message!
Welcome to the EMC Unisphere – Next Generation Storage Management Simplicity
I mean, obviously once you’re over the whole debate of ‘storage, capacity, performance’ you want to actually be able to pay to play (or, $0 PO to play, right? ;))
But I say… why wait? We're all intelligent and savvy individuals. What if I said you could, in the comfort of your own home (or lab), start playing with this technology today with little effort on your part? I say, don't wait. Go download it now and start playing.
For those of you who are familiar with the infamous Celerra VSA, as published numerous times on Chad's blog (New Celerra VSA and the Updated "SRM4 in a box" guide), things have recently gone to a whole new level with the introduction of Nicholas Weaver's UBER VSA! Besser UBER: Celerra VSA UBER v2 – which takes the 'work' out of setup. In fact, all setup requires is an ESX server, VMware Workstation, or VMware Fusion (or, in my particular case, I do testing on VMware Player to prove you can do it) and BAM! You're ready to go, and you have a Unified array at your disposal!
Celerra VSA UBER Version 2 – Workstation
Celerra VSA UBER Version 2 – OVA (ESX)
Though I wouldn’t stop there, if you’re already talking Unified and playing with File data at all, run don’t walk to download (and play with) the latest FMA Virtual Appliance! Get yer EMC FMA Virtual Appliance here!
But don't let silly little PowerPoint slides tell you anything about it – listen to talking heads on YouTube instead :)
I won't include all of the videos here, but I adore the way the presenter in this video says 'series' :) – so take the deep dive and walkthrough of FMA in minutes!
Okay! Fine! I’ve downloaded the Unified VSA, I’ve checked out FMA and seen how it might help.. but how does this help my storage efficiency message? What are you trying to tell me? If I leave you with anything at this point, let’s break it down into a few key points.
- Following best practices will garner you 20% greater efficiency before you even start to get efficient with technologies like Thin Provisioning, FAST, FAST Cache, FMA, etc.
- With the power of a little bandwidth, you’re able to download fully functional Virtual Appliances to allow you to play with and learn the Unified Storage line today.
- The power of managing your file tiering architecture and archiving policy is at your fingertips with the FMA Virtual Appliance.
- I apparently have too much time on my hands. (I actually don’t… but it can certainly look that way :))
- Talk to your TC, Rep, Partner (whoever) about Unified. Feel free to reference this blog post if you want, if there is nothing else to learn from this, I want you – the end user to be educated :)
- I appreciate all of your comments, feedback, positive and negative commentary on the subject. I encourage you to question everything, me, the competition, the FUD and even the facts. I research first, ask questions, ask questions later and THEN shoot. The proof is in the pudding. Or in my case, a unique form of Vegan pudding.
Good luck out there, I await the maelstrom, the fun, the joy. Go download some VSA’s, watch some videos, and calculate, calculate, calculate! Take care! – Christopher :)