EMC Unified Storage – Now community sized! Celerra and CLARiiON all grown up!

I was recently given insight into something up and coming (and, well, clearly out at this point :)): The EMC Unified Storage Community!

What is this community though?  This is the mother lode, so to speak – YOUR source for information on Unified… and it’s available without a login! (Although I encourage a login so you can partake in questions, answers and more!)   So, let’s take this opportunity for a little tour!

EMC Unified Storage Community

If you look at this massive page with so much to offer… it happens to break it ALL down for you, with one key point to start with! Notice the right side of the screen…

Wow, does this take all of the effort out of it! Making it so you can get started quick and easy! Here are some details!

Getting Started in the Unified Storage community

Created on: Aug 16, 2010 3:51 PM by cornwk – Last Modified:  Sep 1, 2010 10:30 AM by cornwk

Thank you for visiting the Unified Storage community.

We hope you will be an active, participating member. But if you just want to view information only, that’s OK too.

Here are some suggestions for how you can get started in this community:
  1. To make sure that you can fully participate in this community, be sure to Login (or Register if this is your first visit to ECN).
    Click Login/Register at the top of the screen.
  2. In the Unified Storage community, visit Breaking the Ice and tell us a little bit about yourself.
  3. Scan the current discussions and jump in – post a new discussion of your own or respond to somebody else’s question.
  4. Have fun!
If this is your first venture into ECN, visit the Quick Tour and learn how to put the communities to work for you!


What I particularly like about this Community, though, is the fact that it is organic… driven by the community FOR the community!   It’s young right now (hey, it just launched!) and it will only continue to mature!   And I particularly want to thank the infamous cornwk @kcornwall for everything she did to make this a reality and to continue driving it forward!

Now, while it may all be relative because content changes regularly… the current threads out there are WOW, ON TARGET with things I see come up in meetings practically every day – so I wanted to bring some specific light and clarity to them!

Every time I sit down with folks and we discuss Celerra, CLARiiON, and the whole of the Unified stack… they say “I want to know more information about….” and the …. is ALL of these threads!

So, without further ado, here are some of the hot links in the community to get you going!

Obviously I advise you to defer to the actual community for new content and more! But I thought I’d highlight what are often heavily discussed items, where people say “Hey! I want more info on this!” Not that I’m saying anyone in particular (hi Mike! :)) should check these out, but if you find these useful, and/or your name is Mike… definitely check it out!

Oh, and if you’re looking into downloading the UBER VSA which I’ve referenced in the past… here is the link to v3.2! Your best friend in virtualization!

Play it again, Sam: Celerra UBER v3.2

Thanks guys, check out the community… grow, and learn, question and learn…. and communitize yourselves!

EMC Unisphere in your pocket! Announcing the UBER VSA 3! (Now with low sodium!)

You heard it here! Time to cut your blood pressure in half! (I apologize for those of you who already have low blood pressure.. this may put you over the top!)

UBERTastic : Celerra UBER VSA v3 – Unisphere – Be sure to click this link to get to the download links for the OVA or Workstation version!

EMC Unisphere, now with less sodium!

So, Roxanne… what is new in this version of the VSA? It appears I’m practically stealing Nick’s entire post here (which I’m cool with… ;))

  • DART is now 6.0.36.4
  • Unisphere management console (rocks!)
  • The Celerra VSA is now 64-bit! This means you can throw RAM at it for bigger setups and it will use it. Beyond 8GB there is less benefit without code changes to the simulation services; future updates from the Celerra VSA engineering team will address this.
  • The biggest and most difficult change to construct: the configuration is now adaptive, depending on the virtual machine setup. This version is intelligent about seeing how many resources you have given it (see the sketch after this list).
  • The new Celerra UBER VSA uses this intelligence to allow a *Thin* mode. If you give the VSA under 2GB of RAM, it will automatically size the memory limits, processes, and management interface settings to run with as little as 1024MB of RAM. You won’t do replication or host a ton of VMs, but you can use this mode to host a few and fully demonstrate/test the new Unisphere interface on even a 2GB laptop.
  • The new VSA also uses this intelligence to automatically allow the configuration of single or dual Data Mover version based on the memory assigned. If you give the VSA more than 4GB of memory you will be given the option to enable an additional Data Mover for use as a standby or load balancing experimentation. This means this single appliance can be a small lightweight NFS unit at 1024MB of RAM or can be a 2 Data Mover powerhouse at 8GB of RAM. All automatically configured on first boot through the wizard.
  • Automatic VMDK/storage additions have been adjusted for the new 64-bit OS, so this still works: shut off the VM, add VMDK(s), turn it back on, and you have more space. Automagic!
  • Since automagic is so cool, I have changed the Data Mover Ethernet binding to be automatic also. The VM starts with 1 interface for management and 1 interface for the Data Movers. If you want more for the DM(s), just shut off the VM, add NICs (up to 6 additional), and turn it back on. It will automatically bind the Data Mover (yes, it works with the 2-DM mode also) to the new interfaces and virtual slots. Just go back into Unisphere and assign away. This allows easy scale-up for the bigger 2 Data Mover, 8GB-of-RAM versions.
  • Configuration is now Perl/Bash based instead of just Bash to keep things cleaner and slicker and allow for some coolness later on ;)
  • NTP from the configuration portion of the wizard works correctly. It sets both the Control Station and all Data Movers and enables NTP as a running service. Make sure your NTP server is valid.
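
To make that resource-adaptive behavior concrete, here is a minimal sketch of the sizing rules described above – my own illustration in Python, NOT the VSA’s actual Perl/Bash wizard code; the function and key names are hypothetical, and the thresholds are just my reading of the bullets above:

```python
# Illustrative only -- not the VSA's real first-boot wizard logic.
# Thresholds come from the feature list above: under 2GB of RAM
# triggers "Thin" mode; more than 4GB offers a second Data Mover.

def vsa_boot_profile(ram_mb: int) -> dict:
    """Map the RAM assigned to the VM to the configuration the wizard would propose."""
    profile = {
        "thin_mode": ram_mb < 2048,          # trimmed memory limits/processes/mgmt
        "data_movers": 1,                    # always at least one Data Mover
        "second_dm_offered": ram_mb > 4096,  # standby or load-balancing DM
    }
    if profile["second_dm_offered"]:
        profile["data_movers"] = 2
    return profile

# A 1GB laptop build, a default build, and an 8GB "powerhouse":
for ram in (1024, 2048, 8192):
    print(ram, "MB ->", vsa_boot_profile(ram))
```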

    So let’s summarize:

    1. New Unisphere
    2. 64 Bit
    3. Automatic sizing
    4. Thin Mode
    5. Optional 2 Data Mover mode
    6. Automatic Data Mover Ethernet adding (along with fixed Storage [VMDK] adding)
    7. NTP works now

    Wow! That’s a whole lot! Where do I sign up to download?! UBERTastic : Celerra UBER VSA v3 – Unisphere – No signup required, just go click and download!  Because Nick has so many other vital details about how THIS UBER VSA differs from past UBER VSAs, I am referring you to his page so you can read the ‘technical’ details and stuff!  So go download the UBER VSA TODAY! (I am downloading it right now, literally…)

    OMFG IM DOWNLOADING IT TOO

    I look forward to your feedback… and to your enjoyment of this tool; I know I’ve been waiting for this for some time myself!

    EMC didn’t invent Unified Storage; they perfected it

    Hi guys! Remember me? I’m apparently the one who upset some of you and enlightened others; and the rest of you… well, you drove a lot of traffic here, enough for my blog to beat out EMC’s main website as the primary source for information on "Unified Storage" (and for that, I appreciate it :))

    In case any of you forgot some of those "target" posts, here they are for your reference! But I’m not here to start a fight! I’m here to educate, and to focus not on what this previously OVERLY discussed Unified Storage Guarantee was or is, but instead to drill down into what Unified Storage will really bring to bear.   So, without further ado!

    What is Unified Storage?

    I’ve seen a lot of definitions of what it is – quite frankly, a lot of stupid definitions too. (My GOD I hate stupid definitions!)  But what does it mean to you and me when you Unify?   I could go on and on about the various ‘definitions’ of what it really is (and I even started WRITING that portion!) but instead I’m going to scrap all of that, so I do not end up on my own list of ‘stupid definitions’, and define Unified Storage in its simplest terms.

    A unified storage system merges NAS and SAN. Optimized for performance and interoperability, the system simultaneously stores both file data and blocks of application data in virtually any operating environment.

    You can put your own take and spin on it, but at its guts, that is what the basics of a "Unified Storage" system are; nothing special about it – NAS and SAN (hey, lots of people do that, right?!)  You bet they do!   This is in no way the definitive definition of “Unified Storage”, and frankly that is not my concern either.   So, taking things to the next level: now that we have a baseline of what it takes to ‘get the job done’, it’s time to evaluate the Cost of Living in a Unified Storage environment.

    Unified Storage Architecture Cost of Living

    I get it.  No really I do.   And I’m sure by now you’re tired of the conversation of ‘uniqueness’ focused on the following core areas:

      • Support for Mixed Clients
      • Support for multiple types (tiers) of disk
      • Simplified Provisioning
      • Thin Provisioning
      • Improving Utilization

    All of these items are simply a FACT and an expectation when it comes to a Unified Platform. (Forget unified – a platform in general.)   Lack of support for multiple tiers, being locked down to a single client, complicated provisioning which can only be done fat (losing utilization and likely wasting time) – that, my friend, is the cost of living.    You’re not going to introduce a wasteful, fat, obsolete system, and frankly I’m not sure of any (many) vendors actually delivering services which don’t meet several of these criteria; so the question I’m asking is… why do we continue to discuss these points?   I do not go to a car dealership and say “You know, I’m expecting a transmission in this car – you have a transmission, right?” (Feel free to replace transmission with tires and other things you just flat-out EXPECT.)    It’s time to take the conversation to the next level, because if you’ve ever talked to me you know how I feel about storage: “There is no inherent value of storage in and of itself without context or application.”   You don’t want spinning rust just for the sake of having it spin; you want it to store something for you, and it is with that that you need to invest in Perfection.

    Unified Storage Perfection

    What exactly is the idea of Unified Storage Perfection?   It is an epic nirvana whereby we shift from traditional thinking, take NAS and SAN out of the business of merely rusty spindles, and enable and engage the business to earn its keep.

    Enterprise Flash Disks

    Still storage, yet sexy in its own right.  Why?  First of all, it’s FAST, OMFG FLASH IS SO FAST! And second of all, it’s not spinning, so it’s not annoying like the latest and greatest SAS, ATA or FC disk!    But what makes this particular implementation of EFD far sexier than simple consumer-grade SSDs is that these things will guarantee you a consistent speed and latency through and through.   I mean, sure, it’s nice that these things can take the sheer number of FC disks you’d need to run an aggressive SQL Server configuration and optimize the system to perform, but it goes beyond that.

    Fully Automated Storage Tiering (FAST)

    Think back to that high-performance SQL workload you had a moment ago: there might come a time in the life of the business where your performance needs change. Nirvana comes a-knocking, and the power of FAST enables you to dynamically and non-disruptively move from one tier of storage (EFD, FC, SATA) to another, so you are guaranteed not only investment protection but scalability which grows and shrinks as your business does.    Gone are the days of ‘buy for what we might use one day’ and welcome are the days of the Dynamic and Scalable business.

    FAST Cache

    Wow, is this the triple whammy or what?  Building upon the previous two points, this realm of Perfection is able to take the performance and speed of Enterprise Flash Disks and the concept of tiering your disks to let you use those same existing EFD disks to extend your READ and WRITE cache on your array!    FAST Cache accelerates performance to address unexpected workload spikes. FAST and FAST Cache are a powerful combination, unmatched in the industry, that provides optimal performance at the lowest possible cost.  (Yes I copied that from a marketing thingie, but it’s true and is soooooo cool!) 

    FAST + FAST Cache = Unified Storage Performance Nirvana

    So, let’s put some common sense on this, because this is no joke, nor is it marketing BS.    You assign EFDs to a specific workload when you want to guarantee a certain speed and a certain response time (win).    You have unpredictable workloads which may need to be fast sometimes but slow other times, on a quarterly or yearly basis, so you leverage FAST to move that data around – that’s your friend when you can PREDICT what is going to happen.    But what about when it is slow most of the time, and then on June 29th you make a major announcement that you were not expecting to hit as hard as it did, and BAM! your system goes in the tank because data sitting on FC or SATA couldn’t handle the load?   Hello FAST Cache, how I love you so.     Don’t get me wrong, I absolutely LOVE EFDs and I wish all of my data could sit on them (at home a lot of it does ;)), and I have a massive desire for FAST because I CAN move my workload around based upon predictable or planned patterns (marry me!).  But FAST Cache is my superman, because he is there to save the day when I least expect it; he caches my reads when BOOM, I didn’t know it was coming, but more importantly he holds my massive load of WRITES which come in JUST as unexpectedly.   So for you naysayers, or just confused ones who wonder why you’d have one vs. the other vs. the other: hopefully this example use-case (and the quick sketch below) is valuable.   Think about it in terms of your business – you could get away with one or the other, or all three… either way, you’re a winner.
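
    If it helps to see that decision spelled out, here is a tiny sketch of the logic in the paragraph above – my own illustration in Python, not anything EMC ships, and the workload labels are hypothetical:

    ```python
    # My illustration of the EFD / FAST / FAST Cache use-cases above -- not EMC code.

    def perfection_pick(always_hot: bool, predictable_peaks: bool) -> str:
        """Pick which 'Perfection' tool fits a workload, per the use-cases in the text."""
        if always_hot:
            return "Dedicated EFDs"   # guaranteed speed and response time
        if predictable_peaks:
            return "FAST"             # scheduled tier moves (EFD / FC / SATA)
        return "FAST Cache"           # absorbs the spikes you never saw coming

    print(perfection_pick(always_hot=True, predictable_peaks=False))   # aggressive SQL
    print(perfection_pick(always_hot=False, predictable_peaks=True))   # quarter-end close
    print(perfection_pick(always_hot=False, predictable_peaks=False))  # June 29th surprise
    ```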

    Block Data Compression

    EMC is further advancing its storage efficiency innovation as the first storage provider to introduce block data compression, by allowing customers to compress inactive data and reclaiming valuable storage capacity— data footprints can be reduced by up to 50 percent. A common use case would be compressing inactive data once EMC FAST software has moved that data to the most cost-effective storage tier. Block data compression joins EMC’s existing capabilities, including thin provisioning and data deduplication, to automatically and transparently maximize storage utilization.

    Yea, I DID copy that verbatim from a press release – and do you know why? Because it’s right! It even addresses a pretty compelling use-case!   So think about it a moment.  Does this apply to you?  I’d never compress ALL of my data (reminisces back to the days of DoubleSpace, where, let’s just say, for any of us who lived it… those were interesting times ;)) But think about the volume of data you have sitting on primary storage which is inactive, wasting space as it sits un-accessed and consuming maximum capacity!  And this is about more than just that data type; unlike some solutions, this is not all-or-nothing.
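
    As quick back-of-the-envelope math on that “up to 50 percent” claim (my hypothetical numbers, not EMC’s):

    ```python
    # Hypothetical numbers to illustrate the "up to 50 percent" claim above.
    inactive_tb = 10.0   # inactive data parked on primary storage
    reduction = 0.50     # best-case footprint reduction from compression

    reclaimed_tb = inactive_tb * reduction
    print(f"Compressing {inactive_tb:.0f} TB of inactive data reclaims "
          f"up to {reclaimed_tb:.0f} TB of usable capacity.")
    ```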

    Think if you could choose to compress on demand! Compress, say… your virtual machine, right out of vCenter! But wait, there’s more!   And there’s so much more to say on this, let alone the things which are coming… I don’t want to reveal what’s coming, so I’ll let Mark Twomey do it, which he did here:  Storage Services for Clariion Storage Pool LUNs

    What does all of this mean for me and Unified Storage?!

    Whoa, hey now! What do you mean, what does all of this mean?! Are you cutting me short?  Yes.  Yes I am. :)   There are some cool things coming which I cannot talk about yet… not to mention all of the new stuff coming in Q3. But the things I was talking about – that’s stuff I can talk about TODAY. There are only even better things (and cake) coming tomorrow :)

    I could fill this with videos, decks, resources, references, Unisphere and everything under the sun (let me know if you really want that… I’ve done it in the past as well).  But ideally, I want you to make your own decision and come to your own conclusions.  What does this mean for you?   Stop asking “What is Unified Storage?” and start asking “What value can my business derive from these technologies in order to save money, save time, and cut waste?”    I’ll try to avoid writing yet another article on this subject unless you so demand it! I look forward to all of your comments and feedback! :)

    EMC 20% Unified Storage Guarantee: Final Reprise

    Hi! You might remember me from such blog posts as EMC 20% Unified Storage Guarantee !EXPOSED! and the informational EMC Unified Storage Capacity Calculator – The Tutorial! Well, here I’d like to bring you the final word on this matter! (Well, my final word… I’m sure well after I’m done discussing this, you will still be – which is cool; I love you guys and your collaboration!)

    Disclaimer: I am in no way saying I am the voice of EMC, nor am I assuming that Mike Richardson is in fact the voice of NetApp, but I know we’re both loud, so our voices are heard regardless :)

    So, on to the meat of the ‘argument’, so to speak. (That’d be some kind of vegan meat substitute, being that I’m vegan!)

    EMC Unified Storage Guarantee

    Unified Storage Guarantee - EMC Unified Storage is 20% more efficient. Guaranteed.

    I find it useful to quote the text of the EMC Guarantee, and then, as appropriate, drill down into each selected section in our comparable review on this subject.

    It’s easy to be efficient with EMC.

    EMC® unified storage brings efficiency to a whole new level. We’ve even created a capacity calculator so you can configure efficiency results for yourself. You’ll discover that EMC requires 20% less raw capacity to achieve your unified storage needs. This translates to superior storage efficiency when compared to other unified storage arrays—even those utilizing their own documented best practices.

    If we’re not more efficient, we’ll match the shortfall

    If for some unlikely reason the capacity calculator does not demonstrate that EMC is 20% more efficient, we’ll match the shortfall with additional storage. That’s how confident we are.

    The guarantee to end all guarantees

    Storage efficiency is one of EMC’s fundamental strengths. Even though our competitors try to match it by altering their systems, turning off options, changing defaults or tweaking configurations—no amount of adjustments can counter the EMC unified storage advantage.

    Here’s the nitty-gritty, for you nitty-gritty types
    • The 20% guarantee is for EMC unified storage (file and block—at least 20% of each)
    • It’s based on out-of-the-box best practices
    • There’s no need to compromise availability to achieve efficiency
    • There are no caveats on types of data you must use
    • There’s no need to auto-delete snapshots to get results

    This guarantee is based on standard out-of-the-box configurations. Let us show you how to configure your unified storage to get even more efficiency. Try our capacity calculator today.

    Okay, now that we have THAT part out of the way… what does this mean? Why am I stating the obvious (so to speak)?  Let’s drill down to the discussions at hand.

    The 20% guarantee is for EMC unified storage (file and block—at least 20% of each)

    This is relatively straightforward.  It simply says “Build a Unified Configuration – which is Unified.” SAN is SAN, NAS is NAS, but when you combine them together you get a Unified Configuration! Not much to read into that; just that you’re likely to see the benefit of 20% or greater in a Unified scenario, more than in a comparable SAN-only or NAS-only scenario.

    It’s based on out-of-the-box best practices

    I cannot stress this enough: Out-Of-Box Best Practices.   What does that mean?    Universally, I can build a configuration which will say to this “20% efficiency guarantee”: “Muhahah! Look what I did! I made this configuration which CLEARLY is less than 20%! Even going into the negative percentile! I AM CHAMPION, GIVE ME DISK NOW!”   Absolutely.  I’ve seen it, and heard it touted. (Hey, even humor me as I discuss a specific use-case which Mike Richardson and I have recently discussed.)    But building a one-off configuration which makes your numbers appear ‘more right’, vs. using your company’s subscribed best practices (and out-of-box configurations), is what is being proposed here.   If it weren’t for best practices we’d have R0 configurations spread across every workload, with every feature and function under the sun disabled, to say ‘look what I can do!’

    So, I feel it is important to put this matter to bed (because so many people have been losing time and sleep over this debate).  I will take the liberty of quoting from a recent blog post by Mike Richardson – Playing to Lose, Hoping to Win: EMC’s Latest Guarantee (Part 2).    In this article Mike did some –great– analysis.  We’re talking champion.  He went through and used the calculator, built out use-cases and raid groups, and really gave it a good and solid run-through (which I appreciate!)   He was extremely honest, forthright, open and communicative about his experience, his configuration, and building this out with the customer in mind.   To tell you the truth, Mike truly inspired me to follow up with this final reprise.

    Reading through Mike’s article I would like to quote (in context) the following from it:

    NetApp Usable Capacity in 20+2 breakdown

    The configuration I recommend is to the left.  With 450GB FC drives, the maximum drive count you can have in a 32bit aggr is 44.  This divides evenly into 2 raidgroups of 20+2.  I am usually comfortable recommending between 16 and 22 RG size, although NetApp supports FC raidgroup sizes up to 28 disks.  Starting with the same amount of total disks (168 – 3 un-needed spares), the remaining disks are split into 8 RAID DP raidgroups. After subtracting an additional 138GB for the root volumes, the total usable capacity for either NAS or SAN is just under 52TB.

    I love that Mike was able to share this image from the Internal NetApp calculator tool (It’s really useful to build out RG configurations) and it gives a great breakdown of disk usage.

    For the sake of argument, for those who cannot make it out from the picture: what Mike has presented here is a 22-disk RAID-DP RG (20+2 disks, made up of 168 FC450 disks with 7 spares). I’d also like to note that the snapshot reserve has been changed from the default of 20% to 0% in this example.

    Since I do not have access to the calculator tool Mike used, I ran my own spreadsheet calculator, which more or less confirms that what Mike’s tool says is absolutely true!   But this got me thinking!    (Oh no! Don’t start thinking on me now!)    And I was curious.   Hey, sure, this deviates from best practices a bit, right? But BPs change at times, right?
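
    And here’s why bigger raidgroups are so tempting on paper – a quick back-of-the-envelope sketch of raw parity overhead (my arithmetic in Python, not Mike’s tool; it deliberately ignores drive right-sizing, WAFL/aggregate reserves and root volumes, all of which reduce usable space further):

    ```python
    # Raw RAID-DP parity overhead by raidgroup size -- illustration only.
    # Ignores right-sizing, WAFL/aggregate reserves, and root volumes.

    DRIVE_GB = 450  # FC450 drives, as in Mike's example

    def parity_fraction(rg_size: int, parity_disks: int = 2) -> float:
        """Fraction of a RAID-DP raidgroup spent on parity drives."""
        return parity_disks / rg_size

    for rg_size in (16, 22, 28):   # 14+2 default-ish, Mike's 20+2, NetApp's max
        data_disks = rg_size - 2
        print(f"{data_disks}+2: {parity_fraction(rg_size):.1%} parity, "
              f"{data_disks * DRIVE_GB / 1000:.1f} TB raw data per raidgroup")
    ```

    Run it and you see the pull toward 20+2: parity overhead drops from 12.5% at 14+2 to about 9.1% at 20+2 – which is exactly the kind of capacity win that tempts a configuration away from the out-of-box defaults.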

    So, being that I rarely like to have opinions of my own, and instead like to base them on historical evidence, founded factually and referenced in others… I sent the following text message to various people I know (some former NetAppians, some close friends who manage large-scale enterprise NetApp accounts, etc. (“etc.” is for the protection of those I asked ;))

    The TXT Message was: “Would you ever create a 20+2 FC RG with netapp?”

    That seems pretty straightforward, right? Here is a verbatim summation of the responses I received.

    • Sorry, I forgot about this email.  To be brief, NO.
    • “It depends, I know (customer removed) did 28, 16 is the biggest I would do”
    • I would never think to do that… unless it came as a suggestion from NetApp for some perfemance reasons… (I blame txting for typo’s ;))
    • Nope we never use more then 16
    • Well rebuild times would be huge.

    So, sure, this is a small sampling (of the responses I received), but I notice a resonating pattern there.   The resounding response is a NO.   But wait, what does that have to do with a hole in the wall?   Like Mike said, NetApp can do RG sizes of up to 28 disks.   Also absolutely 100% accurate, and in a small number of use-cases I have found situations in which people have exceeded 16-disk RGs.   So, I decided to do a little research and see what the community has said on the matter of RG sizes. (This happened while trying to find a RAID6 RG rebuild guide – I failed.)

    I found a few articles I’d like to reference here:

    • Raid Group size 8, 16, 28?

      • According to the resiliency guide Page 11:

        NetApp recommends using the default RAID group sizes when using RAID-DP.

      • Eugene makes some good points here –

        • All disks in an aggregate are supposed to participate in IO operations.  There is a performance penalty during reconstruction as well as risks; "smaller" RG sizes are meant to minimize both.

        • There is a maximum number of data disks that can contribute space to an aggregate for a 16TB aggregate composed entirely of a given disk size, so I’ve seen RG sizes deviate from the recommended based on that factor (you don’t want/need a RG of 2 data + 2 parity just to add 2 more data disks to an aggr…). Minimizing losses to parity is not a great solution to any capacity issue.

        • my $0.02.

      • An enterprise account I’m familiar with has been using NetApp storage since the F300 days, and they have tested all types of configurations and found performance starts to flatline after 16 disks.  I think the most convincing proof that 16 is the sweet spot is the results on spec.org.  NetApp tests using 16-disk RAID groups.

    • Raid group size recommendation

        • Okay, maybe not the best reference, considering I was fairly active in responding on this subject in July and August of 2008 in that very thread.  Read through it if you like; the best takeaway I can get from it is something I happened to have said myself…
          • I was looking at this from two aspects: Performance, and long-term capacity.
          • My sources for this were a calculator and capacity documents.
          • Hopefully this helped bring some insight into the operation  and my decisions around it.
            • (Just goes to show… I don’t have opinions, only citeable evidence. Well, and real-world customer experiences as well ;))
      • Raid group size with FAS3140 and DS4243
        • I found this in the DS4243 Disk Shelf Technical FAQ document
        • WHAT ARE THE BEST PRACTICES FOR CONFIGURING RAID GROUPS IN FULLY LOADED CONFIGURATIONS?
        • For one shelf: two RAID groups with maximum size 12. (It is possible in this case that customers will configure one big RAID group of size 23 – 21 data and 2 parity – however, NetApp recommends two RAID groups.)
      • Managing performance degradation over time
      • Aggregate size and "overhead" and % free rules of thumb.
      • Why should we not reserve Snap space for SAN volumes?
        • All around good information, conversation and discussion around filling up Aggr’s – No need to drill down to a specific point.

    So, what does all of this mean other than the fact that I appear to have too much time on my hands? :)

    Well, to sum up what I’m seeing – and considering we are in the section titled ‘out-of-box best practices’:

    1. Best Practices and recommendations (as well as expert guidance and general use) seem to dictate a 14+2, 16-disk RG
      1. Can that number be higher? Yes, but that would run counter to out-of-box best practices, and it seems your performance will not benefit, as seen in the comments above (and in the fact that spec.org tests are run with 16-disk RGs)
    2. By default the system will have a reserve, not set to 0% – so if I were to strip out all of the reserve (which is there for a reason), my usable capacity would go up in spades. But I’m not discussing a modified configuration; I’m comparing against a default, out-of-box best-practices configuration, which by default calls for a 5% aggr snap reserve, a 20% vol snap reserve for NAS, and a SAN Fractional Reserve of 100% (see the sketch after this list)
      1. Default Snapshot reserve, and TR-3483 helps provide backing information and discussion around this subject. (Friendly modifications from Aaron Delp’s NetApp Setup Cheat Sheet)
    3. In order to maintain these ‘out-of-box best practices’ and enable a true model of thin provisioning (albeit not what I am challenging here, especially since Mike completely whacked the reserve space for snapshots), our side of the guarantee carries the ‘caveat’ of “There’s no need to auto-delete snapshots to get results.” Which is simply saying: even on a default, out-of-box system, to take things to the next level you would need to enable “Volume Auto-Grow” on NetApp, or its sister function “Snap Auto Delete”. The first is nice in that it’s not disruptive to your backups – but you can’t grow once you’ve hit your peak, and your snapshots would then be at risk.   Don’t put your snapshots at risk!
    4. Blog posts are not evidence for updating Best Practices, nor do they change your out-of-box defaults.   What am I talking about here?  (Hi Dimitris!)   Dimitris wrote a –great– blog post, NetApp usable space – beyond the FUD, in which he goes into depth on what we’ve been discussing these past weeks. He makes a lot of good points, and even goes so far as to validate a lot of what I’ve said, which I greatly appreciate.    But he takes things a little too far when he ‘recommends’ snap reserve 0, fractional reserve 0, snap autodelete on, etc.    As a former NetApp engineer I would happily recommend plenty of ‘changes’ to the defaults and best practices as the use-case fits; what I would not do is take a holistic “let’s win this capacity battle at the cost of compromising my customer’s data” approach.   By blindly doing exactly what he suggests, you are indeed putting your data integrity and recovery at risk.
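
    To put rough numbers on why those defaults matter, here’s the sketch promised above – my arithmetic only, applying just the out-of-box reserves named in point 2 to a hypothetical 10TB pool (real systems layer right-sizing and other overheads on top of this):

    ```python
    # Applying the default reserves called out above -- illustration only.
    # The percentages are the out-of-box defaults cited in the text.

    AGGR_SNAP_RESERVE = 0.05   # 5% aggregate snap reserve
    VOL_SNAP_RESERVE = 0.20    # 20% volume snap reserve (NAS)
    FRACTIONAL_RESERVE = 1.00  # 100% fractional reserve (SAN LUNs)

    pool_tb = 10.0
    after_aggr = pool_tb * (1 - AGGR_SNAP_RESERVE)

    nas_usable = after_aggr * (1 - VOL_SNAP_RESERVE)
    # With 100% fractional reserve and snapshots in play, every TB written
    # to a LUN holds another TB back for overwrites:
    san_usable = after_aggr / (1 + FRACTIONAL_RESERVE)

    print(f"NAS usable: {nas_usable:.2f} TB of {pool_tb} TB")  # 7.60 TB
    print(f"SAN usable: {san_usable:.2f} TB of {pool_tb} TB")  # 4.75 TB
    ```

    Which is exactly why stripping the reserves makes a configuration look great on paper – and why an out-of-box-to-out-of-box comparison is the only fair fight.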

    I’ve noticed that I actually covered all of the other bullet points in this article without needing to drill into them separately.  :) So, allow me to do some summing up on this coverage.

    If we compare an EMC RAID6 configuration to a NetApp RAID-DP configuration, with file and block (at least 20% of each), using out-of-box default best practices, you will be able to achieve no-compromise availability and no-compromise efficiency regardless of data type, with no need to auto-delete your snapshots to get results.   That’s a guarantee you can write home about: 20% guaranteed, with ‘caveats’ you can fit into a single paragraph (and not a 96-page document ;))

    Now, I’m sure… no, let me give a 100% guarantee… that someone is going to call ‘foul’ on this whole thing, and that this will be the hot-bed post of the week. I completely get it.   But what you, the reader, are really wondering is: “Yea, 20% guarantee… a guarantee of what? How am I supposed to learn about Unified?”

    Welcome to the EMC Unified Storage – Next Generation Efficiency message!

    Welcome to the EMC Unisphere – Next Generation Storage Management Simplicity

    I mean, obviously, once you’re past the whole debate of ‘storage, capacity, performance’, you want to actually be able to pay to play (or $0-PO to play, right? ;))

    But I say… why wait?  We’re all intelligent and savvy individuals.  What if I said you could, in the comfort of your own home (or lab), start playing with this technology today, with little effort on your behalf?     I say don’t wait.   Go download it now and start playing.

    For those of you who are familiar with the infamous Celerra VSA as published in Chad’s blog numerous times (New Celerra VSA (5.6.48.701) and Updated “SRM4 in a box” guide), things have recently gone to a whole new level with the introduction of Nicholas Weaver’s UBER VSA!  Besser UBER : Celerra VSA UBER v2 – which takes the ‘work’ out of setup.  In fact, all setup requires is an ESX Server, VMware Workstation or VMware Fusion (or, in my particular case, VMware Player – I do my testing there to prove you can do it) and BAM! You’re ready to go, and you have a Unified array at your disposal!

    Celerra VSA UBER Version 2 – Workstation
    Celerra VSA UBER Version 2 – OVA (ESX)

    Though I wouldn’t stop there: if you’re already talking Unified and playing with file data at all, run, don’t walk, to download (and play with) the latest FMA Virtual Appliance! Get yer EMC FMA Virtual Appliance here!

    Benefits of Automated File Tiering/Active Archiving

    But don’t let silly little PowerPoint slides tell you anything about it; listen to talking heads on YouTube instead :)

    I won’t include all of the videos here, but I adore the way the presenter in this video says ‘series’ :) – deep dive and walkthrough in FMA in Minutes!

      Okay! Fine! I’ve downloaded the Unified VSA, I’ve checked out FMA and seen how it might help… but how does this help my storage efficiency message? What are you trying to tell me?  If I leave you with anything at this point, let’s break it down into a few key points.

      • Following best practices will garner you 20% greater efficiency before you even start getting efficient with technologies like Thin Provisioning, FAST, FAST Cache, FMA, etc.
      • With the power of a little bandwidth, you’re able to download fully functional Virtual Appliances to allow you to play with and learn the Unified Storage line today.
      • The power of managing your File Tiering architecture and Archiving policy is at your finger tips with the FMA Virtual Appliance.
      • I apparently have too much time on my hands.  (I actually don’t… but it can certainly look that way :))
      • Talk to your TC, Rep, Partner (whoever) about Unified.   Feel free to reference this blog post if you want, if there is nothing else to learn from this, I want you – the end user to be educated :)
      • I appreciate all of your comments, feedback, and positive and negative commentary on the subject. I encourage you to question everything: me, the competition, the FUD, and even the facts.   I research first, ask questions, ask questions later, and THEN shoot.    The proof is in the pudding.  Or in my case, a unique form of vegan pudding.

      Good luck out there, I await the maelstrom, the fun, the joy.   Go download some VSAs, watch some videos, and calculate, calculate, calculate!   Take care! – Christopher :)

      EMC Unified Storage Capacity Calculator – The Tutorial!

      The latest update to this is included here in the Final Reprise! EMC 20% Unified Storage Guarantee: Final Reprise

      After all of the brouhaha and discussion from a recent post – EMC 20% Unified Storage Guarantee !EXPOSED! – I thought it valuable to dive a little deeper into our own calculator.

      EMC Storage Guarantee

      I’m sure, like me, some of you may have tried to use the calculator and found it to be really cool, but you also may have experienced a few bouts of frustration.   It’s okay, I completely get it.  I get it so much that I’m writing this article to help reveal some of the challenges and how to overcome them.

      For starters, one of the coolest bits about the EMC Unified Storage Capacity Calculator… is the fact that it has a –help– option right there on screen. I totally get it if you didn’t notice it, or feel you’re above ‘help’ – I’m with you wholly! But I decided ‘why not… what does the “?” unveil?’ Wait for it… it unveils secrets to your success! And a breakdown of the ‘sauce’, so to speak!

      EMC Unified Storage Capacity Calculator

      When you launch the Capacity Calculator for the first time, you should see a screen which looks like this – it defaults to an NX4 with nothing configured or set up.

      Configuration and Templates

      NAS (Templates) SAN (Custom) Capacity Breakdown

      Regardless of which System Model you choose – NX4, NS-120, NS-480 or NS-960 – the “?” help text for the NAS/SAN/Breakdown tabs will be the same across the board, except that the help file will specifically declare which model you’re looking at.

      SAN Custom Configuration – Not Enough Space Error!

       

      As you start to fill the system with disks, you may at some point come across an error such as ‘not enough space’; this will usually come up when you’re playing around with SAN configurations, or NAS (Custom) configs.    There is no need to be worried or alarmed when this happens.   All this is saying is that, based upon the configuration you have ‘defaulted’ in the column/tab you’re working in, there is not enough “space” in that particular tray to add the disks.

      Adding Hot Spares to Configuration / Moving Between Trays

      There are two ways to resolve this: either change to a disk format you can work with (such as HS (Hot Spare)), as seen above, or use the arrows in the System Model diagram to move to another tray entirely!

      System Models

      NX4 System Model NS-120 System Model NS-480 System Model NS-960 System Model

      What I find to be particularly useful and cool is that when you select a particular system and hover over the “?” in the System Model section, it will give you a breakdown of details about the system. (No more needing to go search the internet or call your TC asking “How many drives will my system take!?”)  Not only that, but it also provides you with details of how you’d go about building this configuration – both in this simulator, so to speak, and when you go live with it as a real configuration.   Sweet, if you ask me!

      Total Usable Capacity

      NX4 Total Usage Capacity NS-120 Total Usage Capacity NS-480 Total Usage Capacity NS-960 Total Usage Capacity

      One particularly useful and cool bit about this is that it tells you specifically what kinds of disks the system requires. And one particular complaint I’ve heard from some folks was about not knowing how many spares were recommended in their configuration – well, check out the ‘caution’ symbol!

      Unrealistic Configuration on an NS-960 Required Hot Spares for an NS-960 Configuration

      I built the following unrealistic configuration so we could drill down into the system and see what it reports for required ‘spares’.  Based upon this example, it looks like I need spares of every type: EFD, FC and ATA!    (I populated a tray of each type of disk to make this as unrealistic as possible :))  Pretty cool if you ask me!

      But for the most part, this accounts for all of the ‘errors’, common or otherwise, which I’ve noticed are encountered when using this calculator.

      Feel free to give it a good run-through. I’m glad to see that a majority (read: all) of our concerns about how it operates and functions are actually solved right there in the help file! And in the case of SAN Custom Configuration (read: lack of templates), the little workaround for ‘lack of space’ above seems to address that in whole!

      I hope you find the EMC Unified Storage Capacity Calculator to be as cool as I do, and that you get the best out of it!

      Thanks – and for those of you who haven’t played with the Celerra Virtual Appliance yet, go download the UBER version here! Besser UBER : Celerra VSA UBER v2 (That’ll give you the ability to play around with the Celerra today without having to buy the hardware… nothing spells getting familiar like actually playing with a fully functioning system!)

      Thanks, and good luck!
