EMC didn't invent Unified Storage; they perfected it

Hi Guys! Remember me? I'm apparently the one who upset some of you, enlightened others; and the rest of you… well, you drove so much traffic here that my blog beat out EMC's main website as the primary source for information on "Unified Storage" (and for that, I appreciate it :))

In case any of you forgot some of those "target" posts, here they are for your reference! But I'm not here to start a fight! I'm here to educate, and to focus not on what this previously OVERLY discussed Unified Storage Guarantee was or is, but instead to drill down into what Unified Storage will really bring to bear.   So, without further ado!

What is Unified Storage?

I've seen a lot of definitions of what it is, and quite frankly a lot of stupid definitions too. (My GOD I hate stupid definitions!)  But what does unification actually mean to you and me?   I could go on and on about the various 'definitions' of what it really is (and I even started WRITING that portion of it!) but instead I'm going to scrap all of that so I do not end up on my own list of 'stupid definitions', and instead will define Unified Storage in its simplest terms.

A unified storage system merges NAS and SAN. Optimized for performance and interoperability, the system simultaneously stores both file data and blocks of application data in virtually any operating environment

You can put your own take and spin on it, but at its guts that is what a "Unified Storage" system boils down to; nothing special about it, just NAS and SAN (hey, lots of people do that, right?!)  You bet they do!   This is in no way the definitive definition of "Unified Storage", and frankly that is not my concern either.   So, taking things to the next level: now that we have a baseline of what it takes to 'get the job done', it's time to evaluate the Cost of Living in a Unified Storage environment.

Unified Storage Architecture Cost of Living

I get it.  No really I do.   And I’m sure by now you’re tired of the conversation of ‘uniqueness’ focused on the following core areas:

    • Support for Mixed Clients
    • Support for multiple types (tiers) of disk
    • Simplified Provisioning
    • Thin Provisioning
    • Improving Utilization

All of these items are simply a FACT and an expectation when it comes to a Unified Platform.  (Forget unified, a platform in general.)   Lack of support for multiple tiers, locking you down to a single client, complicated provisioning which can only be done fat so you lose out on utilization and waste your time; that, my friend, is the cost of living.    You're not going to introduce a wasteful, fat, obsolete system, and frankly I'm not aware of many vendors shipping products today which don't meet most of these criteria. So the question I'm asking is: why do we continue to discuss these points?   I do not go to a car dealership and say "You know, I'm expecting a transmission in this car, you have a transmission right?"  (Feel free to replace transmission with tires or anything else you just flat out EXPECT.)    It's time to take the conversation to the next level, because if you've ever talked to me you know how I feel about storage: "There is no inherent value of storage in and of itself without context or application."   Thus, you don't want spinning rust just for the sake of having it spin; no, you want it to store something for you, and it is with that in mind that you need to invest in Perfection.

Unified Storage Perfection

What exactly is the idea of Unified Storage Perfection?   It is an epic nirvana whereby we shift from traditional thinking, take NAS and SAN out of the business of merely spinning rusty spindles, and enable and engage the business to earn its keep.

Enterprise Flash Disks

Still storage, yet sexy in its own right.  Why?  First of all, it's FAST OMFG FLASH IS SO FAST! And second of all, it's not spinning, so it's not annoying like the latest and greatest SAS, ATA or FC disk!    But what makes this particular implementation of EFD far sexier than simple consumer-grade SSDs is the fact that these things will guarantee you consistent speed and latency through and through.   I mean, sure it's nice that these things can replace the sheer number of FC disks you'd need to run an aggressive SQL Server configuration and still keep the system performing, but it goes beyond that.

Fully Automated Storage Tiering (FAST)

Think back to that high-performance SQL workload you had a moment ago; there might come a time in the life of the business where your performance needs change. Nirvana comes a-knocking, and the power of FAST enables you to dynamically and non-disruptively move data from one tier of storage (EFD, FC, SATA) to another, so you are guaranteed not only investment protection but scalability which grows and shrinks as your business does.    Gone are the days of 'buy for what we might use one day' and welcome are the days of the Dynamic and Scalable business.

FAST Cache

Wow, is this the triple whammy or what?  Building upon the previous two points, this realm of Perfection takes the performance and speed of Enterprise Flash Disks and the concept of tiering your disks, and lets you use those same EFDs to extend the READ and WRITE cache on your array!    FAST Cache accelerates performance to address unexpected workload spikes. FAST and FAST Cache are a powerful combination, unmatched in the industry, that provides optimal performance at the lowest possible cost.  (Yes, I copied that from a marketing thingie, but it's true and is soooooo cool!)

FAST + FAST Cache = Unified Storage Performance Nirvana

So, let's put some common sense on this then, because this is no joke, nor is it marketing BS.    You assign EFDs to a specific workload when you want to guarantee a certain speed and a certain response time (win).    You have unpredictable workloads which may need to be fast some of the time but slow at other times, on a quarterly or yearly basis, so you leverage FAST to move that data around; but FAST is your friend when you can PREDICT what is going to happen.    What about when it is slow most of the time, but then on June 29th you make a major announcement you were not expecting to hit as hard as it did, and BAM! your system goes in the tank because data sitting on FC or SATA couldn't handle the load?   Hello FAST Cache, how I love you so.     Don't get me wrong, I absolutely LOVE EFDs and I wish all of my data could sit on them (at home a lot of it does ;)) and I have a massive desire for FAST because I CAN move my workload around based upon predictable or planned patterns (marry me!)  But FAST Cache is my superman, because he is there to save the day when I least expect it; he caches my reads when BOOM, I didn't know it was coming, but more importantly he holds my massive load of WRITES which come in just as unexpectedly.   So for you naysayers, or the just plain confused who wonder why you'd have one vs. the other vs. the other, hopefully this example use-case is valuable.   Think about it in terms of your business: you could get away with one or the other, or all three… Either way, you're a winner.

Block Data Compression

EMC is further advancing its storage efficiency innovation as the first storage provider to introduce block data compression, by allowing customers to compress inactive data and reclaiming valuable storage capacity— data footprints can be reduced by up to 50 percent. A common use case would be compressing inactive data once EMC FAST software has moved that data to the most cost-effective storage tier. Block data compression joins EMC’s existing capabilities, including thin provisioning and data deduplication, to automatically and transparently maximize storage utilization.

Yea, I DID copy that verbatim from a press release. And do you know why? Because it's right! It even addresses a pretty compelling use-case too!   So think about it a moment.  Does this apply to you?  I'd never compress ALL of my data (reminisces back to the days of DoubleSpace where, let's just say, for any of us who lived it… those were interesting times ;)) But think about the volume of data you have sitting on primary storage which is inactive and otherwise wasting space, sitting un-accessed while consuming maximum capacity!  And this is about more than just that data type; unlike some solutions, this is not all or nothing.

Think if you could choose to compress on demand! Compress, say… your virtual machine right out of vCenter! But wait, there's more!   And there's so much more to say on this, let alone the things which are coming… I don't want to reveal what is coming, so I'll let Mark Twomey do it, which he did here:  Storage Services for Clariion Storage Pool LUNs

What does all of this mean for me and Unified Storage?!

Whoa, hey now! What do you mean, what does all of this mean?! Am I cutting you short?  Yes.  Yes I am. :)   There are some cool things coming which I cannot talk about yet, not to mention all of the new stuff coming in Q3. But the things I was talking about above? That's stuff I can talk about TODAY; there are only better things (and cake) coming tomorrow :)

I could fill this with videos, decks, resources, references, Unisphere and everything under the sun (let me know if you really want that; I've done it in the past as well), but ideally I want you to make your own decision and come to your own conclusions.  What does this mean for you?   Stop asking "What is Unified Storage?" and start asking "What value can my business derive from these technologies in order to save money, save time, and cut waste?"    I'll try to avoid writing yet another article on this subject unless you so demand it! I look forward to all of your comments and feedback! :)

Post-Mortem 70-693 Pro: Windows Server 2008 R2, Virtualization Administrator: Why I said “Wow”

Hey guys, it's been a long while since I've done a post-mortem on an exam; I just didn't feel like it for the last few betas I took. So here you go: with so much interest in the Hyper-V exam, here is my post-mortem analysis, what I felt about it, and why I said "Wow" :)

Pro: Windows Server 2008 R2, Virtualization Administrator

About this Exam

This exam validates a candidate's knowledge of Microsoft virtualization technologies.

Audience Profile

Candidates should have one to three years of experience using Microsoft virtualization products, including Hyper-V, System Center Virtual Machine Manager, and Remote Desktop Services (RDS), in a Windows Server 2008 R2 infrastructure. Candidates for this exam are IT professionals who have jobs in which managing or deploying virtualization technologies is their main area of responsibility.

Credit Toward Certification

Exam 70-693: Pro: Windows Server 2008 R2, Virtualization Administrator counts as credit toward the following certification(s):

Microsoft Certified IT Professional: Windows Server 2008 R2, Virtualization Administrator

So, there is the high-level view of the exam as listed at Pro: Windows Server 2008 R2, Virtualization Administrator, and one of the most useful tools you will find on that page is the "Skills Measured" tab, which gives you a comprehensive overview of what kind of content is on the exam. If you study against that list of skills measured, you will indeed be prepared!  I do want to note that I HIGHLY encourage you to also check out the 'Skills Measured' from TS: Windows Server Virtualization, Configuring – Seriously!  – A slight disclaimer here: I mistakenly wrote the reference material against last year's 70-652 TS: Windows Server Virtualization, Configuring, but take it for what it is. Combine the 'skills measured' from both exams and your chances of passing will increase exponentially!

Now, what may be beneficial is a comprehensive understanding of… competitive pressures? Would you call it that? I have to say, I saw a damn lot of another vendor's virtualization product (some might call it the largest virtualization product in the industry, not to mention the most deployed).   The "Installing Hyper-V" section, as seen in Skills Measured, very briefly mentions coverage of clustering and storage – shared and otherwise – accounting for 14% of the exam.  To me it honestly felt more like 45% of the exam had some focus on storage or clustering.  I haven't seen that much iSCSI and FCP touted in a long time! (Take my NFS and CIFS, please! Oh, and while not mentioned, you probably want to ensure you're up on the entire protocol stack, grin :))

Next, if you look across all 4 skill areas, you'll notice SCVMM is included in there.   Yea, there's a reason for that.   In fact, I'd be surprised if there were many questions which DIDN'T include SCVMM! I say that mostly in jest, because it makes you wonder, 'Is this a Hyper-V exam, or purely an SCVMM exam?!' :)

As far as annoying faults in the test go, I only found one major syntactical error, which I reported, but on the whole the test itself was well formed and the questions were free of grammatical mistakes.   Now, let's get into the Wow section.

Perhaps I was a bit hasty when I said "Wow" about this exam.  Perhaps I should have placed myself more into the category of WTF?!?   So, feel free to see an intermingling of my thoughts on the exam now :)   The questions were well formed, perhaps even a little too well formed.   A number of them looked as though they were struggling to find examples of what WASN'T the right answer, because they were all pretty damn easy to answer in and out!   Am I saying I passed? There's a pretty good chance, but I place no bets!    If you are NOT up on the competitive landscape as far as where Hyper-V plays in the industry, you had better be before you take this exam.  I wasn't sure if I was sitting for the VCP, a minor in Citrix, or if this was in fact an actual Microsoft exam! (Yes, I know it was a Microsoft exam, because all of the questions WERE very well formed, and a number of them… were sadly still written to the old adage of 'choose the Microsoft answer' ;))

This exam also included the recent name changes to products, so I commend its accuracy!    And the intimate level of focus on VDI was quite amazing; but sadly, I now reach a sadder point.

If I am to fail this exam under any circumstances, it will be because of the number of 'it depends' questions they had in there.   What does that mean?   Providing details about how many interfaces you should have, with factual information backing it, is PERFECTLY okay; I can sign off on that, no problem, albeit Best Practice and 'minimal acceptable' are further subjective.   But when it comes to degrees of scale and how many VMs I can actually host on a particular server?   Without raw details and a breakdown of workload, and given this isn't a different vendor's solution, the pure economies of scale require me to be EXTREMELY conservative.   I'm not being negative, I'm being factual; we all know that, and we know JUST how subjective things are when it comes to VM density.   With that said, be very careful. I have no guidance there other than to try to find out what the proverbial 'Microsoft answer' is for what density looks like; I've always seen it published as 'not as much as others', and some of the deployments in the exam outright scared me. And I don't get scared by technology, I put fear into its heart!

I'm FAIRLY certain I didn't say anything which violates NDA, since pretty much everything included here is referenced in the Skills Measured page for Pro: Windows Server 2008 R2, Virtualization Administrator, but in case I did… don't spank me! Preferably, fix the questions which are wrong (glares in Liberty's direction ;)) And… well, have a good time. Use of the technology and understanding these skills are pretty much all you need in order to pass!

Now on a personal note! I’m going to be running the Boston Marathon in a few months in order to raise money for disabled children and every single dollar helps, so if you can help me in my cause these children and their families will greatly appreciate it!   Even if you can only afford $1 that’s perfectly fine! The more people who contribute the better!

http://www.firstgiving.com/cxi – Help sponsor my run in the Boston Marathon on behalf of disabled children!

So, thank you all and I hope you find something useful from this post-mortem and truly every $1 helps, and I greatly appreciate it!  Thanks!

Chicago Windows Users Group Enterprise Meeting! Oct 22, Chicago AON Center

The day will be October 22nd, which will play host not only to the next Chicago Windows Users Group meeting, but also to a Windows 7 Party!

I’ll have some of the prizes/giveaways for the Windows 7 party with me, but lo and behold here are the details for the Chicago meeting of the CWUG!

We had our Annual PC Recycle at our September meeting in Downers Grove.  Brian Jones arranged for ATEN to join us in the parking lot of the Microsoft Downers Grove office again this year (thanks Brian!).  This was also our 2nd consumer-focused CWUG – Jeri Stodola gave an example of our very first birds of a feather session and has earned her very own Windows 7 Backpack!  We're still looking for topic suggestions and facilitators for our 2nd enterprise-focused meeting, which will be held in downtown Chicago in the afternoon. Sign-up information is provided below. NOTE:  the date was changed to October 22nd.

October Meeting – Enterprise Focused

Thursday, October 22nd

12:30 – 1:30 bring lunch/network

1:30 – 1:45  meeting begins

1:45 – 4:15 Topics and Birds of a Feather sessions

Session 1:  Hyper-V

Session 2:  Diagnostics and Recovery Toolset (DART)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032424373&culture=en-US

Going forward, we will use the CWUG online group to share information, communicate with each other and schedule meetings:  http://cwug.groups.live.com

Be sure to SIGN UP for the CWUG and I will see you there!

Virtualization, Hyper-V and Microsoft, oh my! (Beta time!)

OMG! It's Beta Thursday! Well, kind of… it's the release of a 'call for SMEs' for the future Windows Server virtualization (re: Hyper-V) beta exam, 70-659!

It will formally go by the name Exam 70-659, TS: Windows Server 2008 R2, Server Virtualization, which is frankly pretty sweet!   So if you're an expert, I suggest you update your SME profile and get yourself in the running for setting the pace of the future!

You can find similar information and how to get an SME profile via this post from the other day, Exchange 2010 Beta Exams are calling you! Update your SME Profile today!

However, for the ‘clicking impaired’ feel free to follow these steps!

  • Visit the Connect Home Page
  • Click on “Were you invited to join Connect?”
  • Put this invitation code into the box: SME2-JC3G-DKDY
  • Fill out the survey/profile

Wow, it’s that easy!

RTM-Weekend! Win7, 2008 R2, Boot from VHD and more!

Yay! It's RTM Weekend! Alright, not for everyone, since we're all still patiently waiting for August 6th when RTM hits TechNet and MSDN, but I needed to get the jump on things because I think I'm busy next weekend!

So, what does RTM weekend entail for me?  Testing was the first order of business: testing installations on my hardware, and getting a feel for how I'll architect my deployment model for Win7 and 2008 R2!

First things first: create bootable VHD images to run my OS out of.    Yes, I plan to deploy my systems via Boot from VHD, so I needed to create bootable images! And for this little decision, I opted to take advantage of WIM2VHD! So, what exactly is WIM2VHD?  Well, that's pretty simple to explain!

The Windows(R) Image to Virtual Hard Disk (WIM2VHD) command-line tool allows you to create sysprepped VHD images from any Windows 7 installation source. VHDs created by WIM2VHD will boot directly to the Out Of Box Experience, ready for your first-use customizations. You can also automate the OOBE by supplying your own unattend.xml file, making the possibilities limitless.
Fresh squeezed, organically grown, free-range VHDs – just like Mom used to make – that work with Virtual PC, Virtual Server, Microsoft Hyper-V, and Windows 7’s new Native VHD-Boot functionality!

All you need in order to be successful with WIM2VHD is:

  • A computer running one of the following Windows operating systems:
    • Windows 7 Beta or RC (or RTM)
    • Windows Server 2008 R2 Beta or RC (or RTM)
    • Windows Server 2008 with Hyper-V RTM enabled (x64 only)
  • The Windows 7 RC Automated Installation Kit (AIK) or Windows OEM Pre-Installation Kit (OPK) installed.
  • A Windows 7 or Windows Server 2008 R2 installation source, or another Windows image captured to a .WIM file.

Then, simply execute a command like I did below and you’re moving along!

Create a bootable VHD of Windows 7 Ultimate
cscript WIM2VHD.WSF /wim:D:\sources\install.wim /sku:ultimate /VHD:C:\vhd\win7ult.vhd

Create a bootable VHD of Windows Server 2008 R2 Enterprise
cscript WIM2VHD.WSF /wim:D:\sources\install.wim /sku:serverenterprise /VHD:C:\vhd\R2Ent.vhd

This frankly takes care of most of the work on your behalf! (Sure did for me!)

FYI: The image defaults to 40 GB, so if you want to change that, use the /size:<vhdSizeInMb> flag.
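For example, sticking with the same Windows 7 command from above, something like this should land you a 25 GB image instead (the 25600 is just an arbitrary number I picked; the flag takes megabytes):

cscript WIM2VHD.WSF /wim:D:\sources\install.wim /sku:ultimate /VHD:C:\vhd\win7ult.vhd /size:25600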

After this point, all you need to do is run bcdedit to make the new VHD bootable and you're set! One thing to note: the /copy command spits out the GUID of the new boot entry it creates, and that is the {GUID} you plug into each of the /set commands which follow.

bcdedit /copy {current} /d "New VHD Description"
    bcdedit /copy {current} /d "Windows 7 Ultimate"
bcdedit /set <guid> device vhd=[driveletter:]\<directory>\<vhd filename>
    bcdedit /set {GUID} device vhd=[c:]\vhd\win7ult.vhd
bcdedit /set <guid> osdevice vhd=[driveletter:]\<directory>\<vhd filename>
    bcdedit /set {GUID} osdevice vhd=[c:]\vhd\win7ult.vhd
bcdedit /set <guid> detecthal on
    bcdedit /set {GUID} detecthal on

And you can perform those same exact steps again for your 2008 R2 VHD as well.   It's not only pretty straightforward, it's so simple anyone can do it! After performing those steps I was up and running on a system which had no data, nothing, nada!
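If you want to double-check your handiwork before you reboot (this is just how I sanity-check it, not an official step), you can dump the boot entries and confirm the device and osdevice values point at your VHD, and optionally make the new entry the default:

bcdedit /enum

bcdedit /default {GUID}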

Now, to apply some context and depth to how I chose to use my deployment model.  I'm running on my personal Lenovo T61p, which has a Kingston 128 GB SSD inside of it.   Because I wanted to have 'some' kind of native OS to help work on anything should something go wrong, I opted for a Windows Server 2008 R2 Enterprise (Core) installation.  That gives me a minimal footprint, yet an OS I can feel comfortable and confident in being able to work on and with!

What this enables is that my native OS (NOS) runs on the "C:" drive and has a VHD directory where my images live.  However, when I'm booted into either of my Boot-from-VHD OSes, the native SSD becomes the "D:" drive, whereby I can share files between the two systems!   And if you forget to copy something to the shared volume and need to access it, feel free to use the mount VHD feature in the Disk Management tool (or Storage in 2008).

I personally prefer to mount it read-only because… I don't want to take any risks, especially when it comes to "Anti-Virus" or other things. (Unless that is my specific intention.)
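And if you prefer the command line, diskpart can do the same thing; here's a quick sketch assuming you're booted into the Win7 image and the 2008 R2 VHD is sitting at D:\vhd\R2Ent.vhd (swap in whatever path yours actually lives at):

diskpart
select vdisk file="D:\vhd\R2Ent.vhd"
attach vdisk readonly

And when you're done poking around:

detach vdisk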

Now that you have a working and operational system, you're good to go! And if you stick with a native OS for maintenance reasons, you can use it to take hard backups of your VHDs for migration to other hardware or general recovery to other points in time! (Note: you can back up the un-used OS from your active OS if you'd like as well :))
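Since a Boot-from-VHD install is really just a single file, a 'hard backup' can be as simple as copying the VHD you are NOT currently booted from. Here's a quick sketch using robocopy, assuming you're booted into the Win7 image, the 2008 R2 image lives in D:\vhd, and E: is some external drive you back up to (all of those paths are just my example, use your own):

robocopy D:\vhd E:\vhd-backups R2Ent.vhd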

So, I hope you have a good RTM weekend coming up; I look forward to being able to generate and use my license keys come August 6th!