Something big you should not ignore

In my new role at Cisco I’ve had the opportunity to observe and study something happening that is (in my opinion) truly significant and mind-blowing. The IT data center landscape as we know it is on the precipice of a major upheaval. I’m not talking about virtualization, cloud, and all the other stuff that’s been obvious in enterprise IT for the last five years.

I’m talking about the trickle-down effect of distributed systems into the enterprise IT data center.

Distributed computing is not new; the ideas and implementations behind it date back to the 1960s. What is interesting and profound is how distributed systems have evolved recently, in the internet era, starting with the publication of Google’s GFS paper in 2003 and the MapReduce paper in 2004.

In the early 2000s, large internet properties such as Google and Yahoo! faced problems nobody had dealt with before. They had to scale their applications to millions of users and store and analyze tremendous amounts of data, all while providing a consistently responsive, high-quality user experience. Google’s publication of GFS and MapReduce began the conversation about how to solve these unique problems with an intelligent software infrastructure managing a warehouse-sized distributed system of standard, low-cost x86 rack servers with local disk. No big, expensive SAN. Instead, a software layer sits between the server hardware and the application, pooling all of the compute, network, and storage into one abstracted logical resource.
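To make that concrete, the programming model Google described is surprisingly small. Below is a toy sketch, in plain Python with no Hadoop or GFS involved, of the word-count example used in the MapReduce paper; the function names and sample data are mine, purely to illustrate the map / shuffle / reduce flow that the framework runs in parallel across many cheap servers.

```python
from collections import defaultdict

# Map phase: each worker turns its slice of the input into (key, value) pairs.
def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

# Shuffle phase: the framework groups every value emitted for the same key.
def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

# Reduce phase: each worker aggregates the values for the keys it owns.
def reduce_phase(grouped):
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["the quick brown fox", "the lazy dog", "the quick dog"]
print(reduce_phase(shuffle(map_phase(docs))))
# {'the': 3, 'quick': 2, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 2}
```

The real systems do essentially this, except the input is terabytes or petabytes spread across GFS or HDFS, and the map and reduce workers are thousands of processes scheduled across the cluster.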

During this time the enterprise IT data center was not faced with any of Google’s problems, nor was there any anticipation that it ever would be. Google published its papers, and nobody outside of academia and other growing internet properties took notice. Understandably so, because scaling an enterprise IT app to millions of users and petabytes of data would be considered quite unusual (even today). Rather, enterprise IT was focused on problems of complex management and inefficiency. Life went on, and enterprise IT continued down a path of server virtualization and private cloud, implementing blade server technologies and catapulting the rise of new stars such as VMware.

Meanwhile, the problems Google solved in scaling internet applications gave rise to new internet properties such as Facebook, Amazon Web Services, and many others, each taking Google’s original ideas, improving upon or customizing them for its own applications, and publishing additional papers. Amazon published its Dynamo paper, describing the highly available distributed key-value store behind parts of its retail platform. In 2008, Facebook open-sourced Cassandra, a highly scalable distributed storage system powering parts of its social application (chat and messaging). Yahoo! engineers developed and open-sourced Hadoop, a distributed system for storing and analyzing very large data sets. These are just a few examples of many.

It would appear the two worlds of enterprise IT and the web didn’t have much in common. One was trying to solve the scale and cost problems associated with massive numbers of users and huge amounts of data for its one or two apps; the other was trying to solve the problems of infrastructure inefficiency and complex management for its numerous but relatively smaller-scale apps. And, naturally, each took a different approach. Enterprises sought infrastructure consolidation and virtualization solutions provided by capable vendors, while the web application providers had no choice but to solve their problems on their own, using self-developed infrastructure software and an army of in-house software engineers. Two parallel worlds of data center IT, neither having any influence or effect on the other.

This is the point at which most people understand the industry, as I did, until I started paying closer attention.

As properties such as Yahoo!, Google, Facebook, and Amazon became great successes, their architects and software engineers realized they had moved mountains, accomplishing the unthinkable: the tremendous problems of efficiently running large-scale applications on low-cost infrastructure had been solved. Publishing papers about your work was the perfect way to claim well-deserved credit and establish name recognition in the industry. At the very same time, enterprise IT began to encounter some of the very same problems already solved by the large web providers, such as scalable data warehousing and analytics (so-called “Big Data”). Additionally, the software-driven distributed systems that solve problems of infrastructure efficiency and management at very large scale could also be applied to infrastructure at a smaller enterprise IT scale (why not?). And finally, the cost savings of an application infrastructure designed to operate on low-cost commodity hardware can be realized at any scale, large web or enterprise IT.

Problem: The average enterprise IT shop doesn’t have an army of in-house software engineers to stand these systems up, production-ready, with any kind of speed or operational efficiency.

Solution: A new business opportunity has presented itself. New problems are arising in enterprise IT that have already been solved by the large web properties, and there is (potentially) a new way for enterprise IT to solve the same old problems with lower infrastructure costs. Why not take these very smart distributed systems engineers and put them into start-up companies with the mission of delivering distributed systems for commercial consumption?

Here are just a few examples of start-ups targeting the “Big Data” space:

Cloudera & Hortonworks - built by former Facebook and Yahoo! engineers, each provides packaging, training, support, and consulting services for Hadoop.

DataStax - provides packaging, consulting services, and training for Cassandra and Hadoop.

MapR - provides its own Hadoop distribution and support.

You can begin to see how distributed systems technology originally developed by Google, Yahoo!, Facebook, and others will begin to trickle down into pockets of the enterprise IT data center with the help of these start-up companies. In fact, it is already happening. Enterprises are starting to deploy clusters of rack-mount servers and network gear for the sole purpose of “Big Data” analytics and data warehousing. These big data pods might snap in to the rest of the general-purpose infrastructure with a clean Layer 3 hand-off. Once these big data clusters are in place, the enterprise has a chance to gain familiarity and expertise in the general framework of a distributed computing architecture, opening the door for other existing application environments to begin leveraging this technology.

Today, Big Data is a relatively new problem for enterprise IT and therefore tends to be deployed as a new application environment. The existing server virtualization infrastructure (VMware, NetApp, EMC, etc.) supporting all of the other traditional applications remains untouched and unchanged. For now.

Even more interesting is that we are beginning to see distributed systems and open source software technologies enter the traditional general-purpose server virtualization and private cloud environment. Consider that a server virtualization deployment often requires a large centralized storage system, typically provided by a storage vendor like EMC or NetApp. Can the very same technology that drives “Big Data” also be used to provide the storage infrastructure for server virtualization? Why not? That’s what the folks at Nutanix set out to accomplish, and they appear to have been successful. Their solution, which claims to eliminate the need for a SAN or NAS for server virtualization, is now generally available.

With distributed computing, one thing remains certain: you need servers, you need a network, and you need software to manage the infrastructure. Most importantly, though, to be relevant in this space as a customer or a vendor, you need to understand the application.

I’m spending a lot of time studying the application frameworks and operational models of things like Hadoop, Cassandra, and OpenStack, and how they translate into current and future software and hardware infrastructure considerations, as well as possible turn-key solutions moving forward. If these technologies gain a real foothold and you don’t understand how these applications work, you run the risk of losing relevance and credibility, whether you’re an engineer in the enterprise IT data center or a vendor trying to sell switches and servers. You can run these apps as virtual machines on your desktop, or in a lab with a handful of rack-mount servers and a switch. All of the documentation you need to learn them is out there and readily available.
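As one concrete way to get hands-on, here is a rough sketch of the classic first Hadoop exercise: a word-count job written as a small Python script for Hadoop Streaming. The file name and the --reduce flag are my own choices for illustration; in practice the mapper and reducer usually live in two separate files that you hand to the Hadoop Streaming jar.

```python
#!/usr/bin/env python
# wordcount.py - a Hadoop Streaming style mapper/reducer pair in one file.
# Mapper: read raw text on stdin, emit "word<TAB>1" for every word.
# Reducer: input arrives sorted by key, so sum the counts per word.
import sys

def mapper():
    for line in sys.stdin:
        for word in line.split():
            sys.stdout.write("%s\t1\n" % word.lower())

def reducer():
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word != current and current is not None:
            sys.stdout.write("%s\t%d\n" % (current, total))
            total = 0
        current = word
        total += int(count)
    if current is not None:
        sys.stdout.write("%s\t%d\n" % (current, total))

if __name__ == "__main__":
    reducer() if "--reduce" in sys.argv[1:] else mapper()
```

You can dry-run the whole thing on your laptop with a shell pipe (cat some.txt | python wordcount.py | sort | python wordcount.py --reduce), which mimics the map / shuffle / reduce flow that Hadoop Streaming performs for you across the nodes of a real cluster.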

Here are some places to start:

Cloudera training, certification, and videos
Apache Hadoop documentation
OpenStack documentation

Stay tuned here for more information and discussion on this topic.

Cheers,
Brad


Disclaimer: The author is an employee of Cisco Systems, Inc. However, the views and opinions expressed by the author do not necessarily represent those of Cisco Systems, Inc. The author is not an official media spokesperson for Cisco Systems, Inc.