NS1-050 Post Mortem: NetApp Installation Accredited Professional

What is there to say about NS1-050…

Avoid taking it if you don’t have a good understanding of FAS equipment and Data ONTAP.

It doesn’t require an advanced understanding, but it does expect a good, solid foundation: time spent working with the equipment, plus some basic diagnostic and configuration work.

Still nothing published about this on the NetApp Certification site, but betas are as betas will be!

Good luck taking it!

New NetApp beta exam! (NS1-050) Installation Accredited Professional

As you can tell from past postings, I keep an eye out for certification exams.
I usually check for Microsoft ones specifically, but I also watch for NetApp exams.

So, with that: NS1-050 is available in beta until August 15th.

It clocks in at 3 hours 15 minutes, which is par for the course for a NetApp beta exam.
These beta exams cost $0.00, with no promotion code or voucher needed.

I imagine we’re looking at a fairly straightforward “What is involved in an installation?” kind of exam.

Where the exam will play out in the big picture is hard to tell; I cannot find any details on this!

But nonetheless – you Storage guys out there, interested in taking it, knock it out!
If I find any details posted about this, I’ll be sure to let you know!

When security best practices collide (Crippling iSCSI in Windows)

As a security guy, I can tell you: there are a lot of really good security best practices to be applied across systems, applications, and servers the world over. But when they are implemented unchecked, problems will arise.

What I am talking about specifically is this little doozy – EnablePMTUDiscovery

Value name: EnablePMTUDiscovery
Key: Tcpip\Parameters
Value Type: REG_DWORD
Valid Range: 0, 1 (False, True)
Default: 1 (True)

The following list describes the parameters that you can use with this registry value:

  • 1: When you set EnablePMTUDiscovery to 1, TCP attempts to discover either the maximum transmission unit (MTU) or the largest packet size over the path to a remote host. TCP can eliminate fragmentation at routers along the path that connect networks with different MTUs by discovering the path MTU and limiting TCP segments to this size. Fragmentation adversely affects TCP throughput.
  • 0: It is recommended that you set EnablePMTUDiscovery to 0. When you do so, an MTU of 576 bytes is used for all connections that are not hosts on the local subnet. If you do not set this value to 0, an attacker could force the MTU value to a very small value and overwork the stack.

    Important: Setting EnablePMTUDiscovery to 0 negatively affects TCP/IP performance and throughput. Even though Microsoft recommends this setting, it should not be used unless you are fully aware of this performance loss.

That little excerpt was taken from the Microsoft KB article:
How to harden the TCP/IP stack against denial of service attacks in Windows 2000

That KB article is still in use and applies to the Windows 2003 space as well. But what does this setting actually do?

Setting it to 0 drops every TCP/IP connection off the local subnet down to 576-byte packets. Oh, and this is a global setting.
So, you go to connect up to an iSCSI LUN, and it connects just fine.
Your host is working, your storage is working, everything seems fine.

When you start to actually use that connection for storage, though, you’ll begin to experience severe latency. That latency translates into IOPS problems and slow disk access, masking the real cause and making it appear to be a disk issue. It effectively cripples your application, yet hides so well that without packet sniffing or a tool like mturoute you’d never know it was happening.
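
To put rough numbers on it, here is a quick back-of-envelope sketch in Python. This is my own arithmetic, not from the KB article, and it assumes plain 20-byte IP and TCP headers with no options; it compares how many TCP segments a single 64 KB iSCSI transfer needs at a 576-byte MTU versus the standard 1500-byte Ethernet MTU.

# Back-of-envelope: segment counts for a 64 KB transfer at two MTUs.
# Assumes plain 20-byte IP + 20-byte TCP headers (no options).
IP_TCP_HEADERS = 40

def segments_for(transfer_bytes, mtu):
    mss = mtu - IP_TCP_HEADERS            # TCP payload per segment (the MSS)
    segments = -(-transfer_bytes // mss)  # ceiling division
    return segments, mss

for mtu in (576, 1500):
    segs, mss = segments_for(64 * 1024, mtu)
    print(f"MTU {mtu:4d}: MSS {mss:4d} bytes -> {segs:3d} segments per 64 KB transfer")

# Prints:
# MTU  576: MSS  536 bytes -> 123 segments per 64 KB transfer
# MTU 1500: MSS 1460 bytes ->  45 segments per 64 KB transfer

Nearly three times the segments means nearly three times the per-packet overhead (headers, interrupts, acknowledgements) on every single I/O, which is exactly the latency pile-up described above.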

mturoute is your friend and will help you determine the current path MTU to a given host.

With that said, on any system with iSCSI connectivity, I strongly encourage you NOT to disable this setting; make sure EnablePMTUDiscovery is always set to 1.
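
If you want to audit a host quickly, here is a minimal Python sketch. It is my own, not from the KB article; it assumes Python 3 running on the Windows host in question and uses only the standard-library winreg module to read the value and warn when discovery has been turned off.

import winreg  # Windows-only standard library module (Python 3)

# Full path to the TCP/IP parameters key referenced in the KB article.
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

def pmtu_discovery_enabled():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        try:
            value, _type = winreg.QueryValueEx(key, "EnablePMTUDiscovery")
        except FileNotFoundError:
            return True  # value absent: the default of 1 (enabled) applies
        return bool(value)

if pmtu_discovery_enabled():
    print("EnablePMTUDiscovery is 1 (or unset): path MTU discovery is active.")
else:
    print("WARNING: EnablePMTUDiscovery is 0: off-subnet traffic drops to a 576-byte MTU!")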

Thanks for your time!

NetApp supports Server 2008 and Hyper-V instances!

So, as seen in NetApp Expands Storage and Data Management Solutions Supporting Microsoft Windows Server 2008 Physical and Virtual Environments, this can mean a lot for environments that want high resiliency and modern systems (Server 2008, Hyper-V), and it helps further consolidate server sprawl as well as storage sprawl.

But what does this mean for you or me?

Oh, this is where the fun gets started!

It’s one thing to have supported SnapManager products on the latest apps:

• Windows Server 2008
• SQL Server 2008
• Exchange Server 2007

But to also be able to support them instanced within Hyper-V, I have to add ‘coolerific’ to the equation. What this means is that even I, in my lab/sandbox/testbed/laptop ;), will be able to actually simulate any of these environments!

My testbed happens to be a Lenovo T61p with 4 GB of RAM, running Server 2008 (Enterprise) with Hyper-V enabled, and also running the NetApp Data ONTAP Simulator for local-side simulated (yet real) storage!

I’ll be able to rig up all scenarios of apps and dependencies, then replicate them back to my actual, real filers, along with the older apps (not mentioned above, but no less important) such as MOSS, Exchange 2003, SQL 2005, and beyond!

Yeah, I think it’s pretty damn cool that the support is there, and it gives me something even *I* can take advantage of, let alone large-scale enterprises!

Embedded on-chip SSD delivered over PCIe (Fusion-IO)

Fusion-IO has released the ioDrive rather recently, and apparently it is already backordered! Oh my!

This thing looks cool on the surface!

Although it comes in modest sizes of 80, 160, and 320 GB, the possibilities nonetheless seem rather cool. It could definitely be a great fit as high-speed disk for small data sets.

Certainly I have initial concerns around the RAID-ability of the device and the potential for data loss. It does promise protection from the failures that plague moving disk components, but even if you’ve never experienced solid-state disk failures (as I have), they are a realistic problem that needs to be addressed.

I’ll personally be watching this one going forward, as they seem to be breaking through a boundary of availability and feasibility in the SSD market, especially as SSDs become even easier (and cheaper!) to deploy at practical sizes.



If their product works as well as it is claimed to, I don’t imagine they’ll be able to survive on the open market for long without getting snatched up!