Note to our readers:  It’s exciting to finally be blogging with the launch of the new website!  The nature of these posts will certainly change as we hit our stride with communicating ideas, thoughts, approaches, and lessons learned.  We look forward to connecting with you through these resources (blogs, tweets, whitepapers) and hope you will provide feedback.  Now on to my first blog post of the new site…

It’s no surprise that technology continues to grow, evolve, and become more sophisticated. Networks sprawl, systems get virtualized, access goes mobile, and data flows to all corners of the enterprise.  To handle these changes we implement newer, faster, and more sophisticated tools to help prevent information from running amok. What concerns me is that in many ways the coolest and newest tools are also the hardest to really understand or take full advantage of.

Let’s take the network as a specific example.  Connectivity in its most basic form involves the transfer of frames between two networked devices.  One device wants to send information, so it creates a frame and sends it across some type of physical media (copper, glass, air) to a destination host on the same subnet. Simple.  However, to really understand even this fundamental building block, we need to be versed in at least the OSI model, ARP, DHCP, and basic frame transmission.
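To make that "fundamental building block" concrete, here is a minimal sketch of what a frame actually is on the wire: a destination MAC, a source MAC, an EtherType, and a payload. This is just an illustration (the MAC addresses and payload are made up), not a working network stack.

```python
import struct

def build_ethernet_frame(dst_mac: str, src_mac: str, ethertype: int, payload: bytes) -> bytes:
    """Pack a minimal Ethernet II frame: 6-byte destination MAC,
    6-byte source MAC, 2-byte EtherType, then the payload."""
    def mac_to_bytes(mac: str) -> bytes:
        return bytes(int(octet, 16) for octet in mac.split(":"))

    header = mac_to_bytes(dst_mac) + mac_to_bytes(src_mac) + struct.pack("!H", ethertype)
    return header + payload

# 0x0806 is the EtherType for ARP; 0x0800 would be IPv4.
# Addresses and payload below are hypothetical examples.
frame = build_ethernet_frame("ff:ff:ff:ff:ff:ff", "aa:bb:cc:dd:ee:01",
                             0x0806, b"who-has 192.168.1.1?")
print(len(frame))  # 14-byte header + 20-byte payload = 34
```

Fourteen bytes of header is all it takes to move data between two hosts on a subnet — everything else in networking is layered on top of this.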

Getting a touch more complex, we can add routing into the mix and move up to L3 concepts (subnet masks, default gateways, routing between VLANs, etc.).  Want to interconnect multiple layer 3 subnets?  Now we start hitting routing protocols and site-to-site connectivity, and we really need to understand how information flows between equipment. But still, these are just the basics of networking.
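The core L3 decision those concepts boil down to can be sketched in a few lines: is the destination on my subnet, or do I hand the packet to my default gateway? A minimal illustration using Python's standard `ipaddress` module (the IP addresses here are hypothetical):

```python
import ipaddress

def next_hop(src_ip: str, netmask: str, dst_ip: str, default_gateway: str) -> str:
    """The basic L3 forwarding decision: deliver directly if the
    destination is on our own subnet, otherwise send to the gateway."""
    subnet = ipaddress.ip_network(f"{src_ip}/{netmask}", strict=False)
    if ipaddress.ip_address(dst_ip) in subnet:
        return dst_ip          # same subnet: ARP for the host and send directly
    return default_gateway     # different subnet: forward to the default gateway

print(next_hop("192.168.1.10", "255.255.255.0", "192.168.1.42", "192.168.1.1"))  # 192.168.1.42
print(next_hop("192.168.1.10", "255.255.255.0", "10.0.0.5", "192.168.1.1"))      # 192.168.1.1
```

Routing protocols, at their heart, are just more sophisticated ways of populating the table this decision consults.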

To add more complexity, as we look forward to the evolution of moving electronic information from one place to another, we see the looming paradigm of the software defined datacenter.  We cast away much of the standalone hardware and leverage massive, disparate cloud computing capabilities to virtualize systems, applications, and full, complex networks.  If our IT staff don’t really understand the basics, how can they even begin to comprehend the new model?  Abstracted networks, decentralized storage, virtual interfaces, East-West firewalling, L2 filtering, and a host of other ideas take those fundamental network ideas and multiply them a thousandfold.

Sure, diving into new technology is sexy, fun, cool, and interesting.  It certainly sounds more appealing than basic network engineering, but the more I think about complexity, the more I strive for simplicity.  Nail down the basics first.  Once you really grasp the fundamentals and how a technology works, it will be easy to grow that knowledge into more complex (and interesting!) implementations.  Jumping right into the new without an understanding of the fundamentals will be fraught with problems.

So it’s back to basics for me.  I need to review some fundamental identity management concepts before I start developing a smart authentication strategy for a client’s sprawling BYOD problem…