eBay is busy building some of the world's most-efficient data centers, and its efforts aren't just for show. The company has figured out a way to tie its computing infrastructure to specific business concerns, and it plans to continuously tweak its operations to meet top-level mandates. On Tuesday, eBay released a whitepaper describing how it accomplished this and laying out a framework for companies that want to do the same.
While not earth-shattering, I suspect that many people in the business of managing internet infrastructure have long sought to manage that infrastructure holistically and have searched for a set of common best practices. There are ITIL & ITSM, which are really GREAT models to start from, yet they seem a bit incomplete for managing internet infrastructure specifically – built on the three pillars of Datacenters, Servers (including storage), and Networks – as a “business”. Then there are traditional ERP best practices, yet ERP is not exactly a perfect fit either, as it is too broad. So why pound the square peg through the round hole? Why not take the best of all three and call it Infrastructure Resource Planning (IRP)?
My own evolution has brought me to that very function – managing the business aspects of enterprise internet infrastructure – and as such, I have begun to plan out the entire lifecycle of that infrastructure, from procurement through end of life/disposition and everything in between, including the very notion of shared-services charge-back models. This is why IRP is so analogous to ERP without being exactly the same thing.
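To make the shared-services charge-back idea concrete, here is a minimal sketch of proportional cost allocation. The tenant names and dollar figures are purely hypothetical; a real model would also weight by metered power, rack space, and service tiers:

```python
def charge_back(total_cost: float, usage_by_tenant: dict[str, float]) -> dict[str, float]:
    """Allocate a shared infrastructure cost pool to internal tenants
    proportionally to their measured usage (e.g. kWh or server-hours)."""
    total_usage = sum(usage_by_tenant.values())
    return {tenant: total_cost * usage / total_usage
            for tenant, usage in usage_by_tenant.items()}

# Hypothetical: a $90,000 monthly pool split across three product teams
# by server-hours consumed.
bills = charge_back(90_000, {"search": 500, "payments": 300, "ads": 100})
# bills == {"search": 50000.0, "payments": 30000.0, "ads": 10000.0}
```

The point is not the arithmetic but the discipline: once usage is metered per tenant, infrastructure can be run as an internal business, which is the heart of the ERP analogy.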
ERP, per Wikipedia, is defined as follows:
“Enterprise resource planning (ERP) systems integrate internal and external management information across an entire organization—embracing finance/accounting, manufacturing, sales and service, customer relationship management, etc. ERP systems automate this activity with an integrated software application. The purpose of ERP is to facilitate the flow of information between all business functions inside the boundaries of the organization and manage the connections to outside stakeholders.” Overall, this sounds a lot like the evolving infrastructure management best practices inside enterprises. The twist is that while ERP is generalized to suit all businesses, IRP would emphasize the underlying internet infrastructure that a business runs upon today and manage it accordingly.
ITIL is a GREAT framework to follow for infrastructure management, predicated upon 30 years of evolution and primarily focused on IT services, which are traditionally centered around the “desktop”. “ITIL advocates that IT services must be aligned to the needs of the business and underpin the core business processes. It provides guidance to organizations on how to use IT as a tool to facilitate business change, transformation and growth.
The ITIL best practices are currently detailed within five core publications which provide a systematic and professional approach to the management of IT services, enabling organizations to deliver appropriate services and continually ensure they are meeting business goals and delivering benefits.”
However, ITIL doesn’t quite cover the infrastructure optimization of core datacenters, servers, and switches/routers specifically enough. My own experience has shown that a key component of IRP differentiating it from ITIL is that engineering research and development are required to holistically optimize the three pillars of internet infrastructure: Datacenters, Servers (including storage), and Networks.
Then, finally, there is ITSM. ITSM is itself a process-based framework much like ITIL; however, ITSM is not attributed to any one person or organization (ITIL is trademarked by the UK Cabinet Office). “ITSM is generally concerned with the ‘back office’ or operational concerns of information technology management (sometimes known as operations architecture), and not with technology development.” This notion of a management framework is, again, a GREAT start; however, the fact that it is not concerned with technology development is where it falls short and where IRP picks up.
The idea of infrastructure R&D is the KEY differentiator of IRP from ITIL, ITSM, and ERP. IRP is very much a large-enterprise concept that does not disassociate itself from advancing the underlying technology; it embraces it as a way to drive ever more contribution to businesses’ bottom lines.
IRP is relevant today because businesses have scaled and evolved to the point where applications can nearly be separated from direct relationships with internet infrastructure. From the moment virtualization commenced, we have been trying to let apps live anywhere, anytime, on a cohesively managed infrastructure. When that infrastructure can be managed as a system unto itself is when IRP becomes relevant.
IRP is not a radical movement, nor is it an earth-shattering concept. It is a concept, however, that is begging to be recognized as we further advance the separation of apps from the underlying infrastructure. Then we can plan, manage, and tune that infrastructure as a system – which is why technology advancement is a key component of IRP, and where ITIL & ITSM fall just a bit short. Thus, IRP as a concept is timely and needed to truly get hold of your internet infrastructure – from supply chain to management and monitoring to refresh cycles to research and development – all through the lens of optimizing the underlying business value proposition.
PS – stay tuned, as there is an emerging metric that will tie IRP all together – it will be made public in the next month.
Ever since the Internet took off, those who manage its infrastructure have been laser-focused on optimizing each and every part of the “stack” leading up to, and stopping short of, the application (another post on this one, as that is about to change!!!). In doing so, the largest players have tuned their datacenter energy consumption through quantification via Power Usage Effectiveness (PUE), have pushed the HW vendors to tune the servers we deploy in the datacenters via efforts like the Open Compute Project (just attended a recent event), and the industry has contributed more robustly to the ever-evolving sets of industry management standards that are common across MOST infrastructure teams (see ITIL v3).
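For readers new to the metric: PUE is simply the ratio of total facility energy to the energy delivered to IT equipment, so overhead (cooling, power distribution, lighting) is everything above 1.0. A quick sketch, using hypothetical meter readings:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.
    A perfect facility (zero cooling/distribution overhead) would score 1.0."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly readings: utility meter vs. IT-load meter
print(pue(1_200_000, 1_000_000))  # → 1.2
```

The honesty of the number depends entirely on where you meter: measuring from the utility meter and counting everything, as the best operators do, leaves nowhere for overhead to hide.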
What is not all that common, however, is an over-arching view (a process map) to tie this universe together end to end – DCIM adoption has been slow due in part to this lack of cohesive visibility. To advance our ability to tune the “infrastructure engine” even further, we need to begin looking at infrastructure holistically, the way manufacturers have long done via Enterprise Resource Planning. I posit that we need to adopt Infrastructure Resource Planning (IRP) as a way to advance the concepts of ITIL + ITSM + DCIM + (soon to be published, new metric!!).
I’d been holding out a bit on writing this, as it really is a synthesis of ideas (aren’t they all), with special mention of my dialogue with Jeffrey Papen of Peak Hosting (www.peakwebhosting.com)…
I’ve been collaborating and speaking extensively with Jeffrey on the next phase of “hosting”, since we are now moving beyond the hype cycle of “Cloud Computing” (see my previous post, “The End of the Cloud Era”). The community at large (and people in general) loves the idea of simple, bite-sized “solutions” with pithy, “sexy” naming conventions (think <30-second sound bites), and that was the promise/expectation around “the cloud” as it was popularized – a magic all-in-one solution whereby you just add applications and the “cloud” does the rest. Yet the promise never quite met expectations, as the “cloud” really ended up being an open-standards evolution of “virtualization” – nothing wrong with that, just not the “all-in-one” solution people really wanted the cloud to be (PS – “all-in-one” refers to the aforementioned vision of applications simply being pushed through APIs to the “cloud”, with the “cloud” managing all underlying resources).
So, as the cloud hype dissipates (love the metaphor), we are sorta back to the same basic elements that make up infrastructure – Datacenters, Compute (IT), Communications (switches/routers), and the Software that manages it all (virtualization, cloud, etc.) – all accessible through yet-to-be-built APIs. Put another way, we are coming full circle, back to centralized, on-demand computing that needs one more element to make it all work – Subject Matter Experts (SMEs).
I was inspired to write this today when I saw this post from Hitachi: http://www.computerworld.com/s/article/9226920/Hitachi_launches_all_in_one_data_center_service – “Japanese conglomerate Hitachi on Monday launched a new data center business that includes everything from planning to construction to IT support. Hitachi said its new “GNEXT Facility & IT Management Service” will cover consulting on environmental and security issues, procurement and installation of power, cooling and security systems, and ongoing hardware maintenance. It will expand to include outsourcing services for software engineers and support for clearing regulatory hurdles and certifications.” This is the comprehensive “build to suit” solution the market has been seeking since the cloud – it includes everything needed to get your infrastructure building blocks right, and it is provided as a service – but what do we call this service?
How about “Operations-as-a-Service“!!
OaaS pulls together the elements of IaaS + PaaS + SMEs. It outsources the “plumbing” to those who can make it far more cost effective through economies of scale. Sure, there are a select few companies that will do this all in house: Google, eBay, Microsoft, Amazon, Apple (trying), and of course, Zynga. Yet these companies are at such massive scale that it makes sense – and even they have excess capacity (at least they should), which is why AWS was born in the first place, and why we are now seeing Zynga open up its platform to gamers (see: http://www.pocketgamer.biz/r/PG.Biz/Zynga+news/news.asp?c=38455). Yet these are the exceptions, not the rule.
The rest of the world should be – and is – seeking comprehensive, end-to-end Operations as a Service provided by single vendors. That doesn’t preclude the marketplace from buying discrete parts of OaaS individually; however, the dominant companies that emerge in this next decade will seek to add more and more of the OaaS solution set to their product lists, thereby catalyzing a lot (I mean a lot) of consolidation.
I will be following up this post with a more detailed look at how this concept is playing out; in the meantime, I would very much like to hear feedback on this topic – is the world looking for OaaS?
Simply put – WOW!! – Switch (http://switchlv.com/pages/home.php) Las Vegas is a datacenter that a lot of people have never heard of, yet it is the best-designed datacenter I have ever had the pleasure of touring. (This is not a paid endorsement!)
Ok, ok, ok – I hear you – how can this be the greatest-designed datacenter if you’ve never heard of it? Simple: it is so successful that it hasn’t needed to broadcast its name, as nearly all of its business has come through word of mouth – and the client list is a veritable “who’s who” of behemoth companies, both public and private, that have chosen to colocate here (it really is an impressive list).
What struck me right off the bat was simply how immaculate everything is (and huge – think a near-100MW facility), which led me to the amazing attention to detail in everything (down to the toilets in the restrooms). I was lucky to get a tour with my friend Mark Thiele, who has been with Switch for the past year.
Let’s get to the good stuff – everything (and I mean everything) is the brainchild of founder/CEO Rob Roy, who has over 125 patents (http://switchlv.com/pages/rob-roy-originals.php) pertaining to his T-SCIF (heat-containment cabinet PODs), living data center (automatic building adjustment management system), and TSC 600s (quad-system large-scale HVAC modules), as well as several dozen other data center system concepts. What the industry as a whole is only now recognizing as “best practice”, Rob has been doing since 2004.
Heat containment/expulsion, direct/indirect evaporative cooling, refrigerant DX, intuitive infrastructure (the living datacenter) that spot-adjusts temps by rack, security that compares to that of a nuclear power plant, redundancy that would make a Tier IV datacenter jealous, and even an on-site theater to host groups (the seats were even designed by Rob) – when it comes to the best datacenter you can imagine or have ever thought of, you can see it live at Switch. (PS – annualized average PUE below 1.2, with days well below 1.1 – measured from the utility meter, counting everything!!!)
I was equally impressed with the lean organization that supports this second-to-none facility – from the security staff (which really is on par with a nuke plant – no joke) to the senior staff, everyone was warm and genuine and, you could tell, really liked going to work every day (and that is at a datacenter, mind you!!!). It all stems from the top – Rob is, from the get-go, a warm, friendly and, get this, humble person. He puts you at ease the moment you walk into his office, and his office reveals the kid still inside. Rob surrounds himself with like-minded people, and that is a recipe for success.
Switch’s client list is just the icing on the cake – a veritable who’s who of major global entities, including the US Federal Govt. All this while spending a near pittance on marketing – why, you ask? ’Cause the facility and its staff truly sell themselves – I can’t imagine anyone who has toured this facility not wanting their gear there, even if only to access the theater room!!
So what is in store for Switch? Quite simply, whatever they want. Privately held, exceptionally profitable, and with a patent portfolio that might make Microsoft or Google jealous, the company can write its own ticket – you don’t often find that in the tech sector. A huge congratulations to Rob (thnx Mark) and team for building such a great company and designing the best datacenter I have ever seen!!
Trying to stay aware of technology trends over time, as they can provide insights into future directions, I have always thought that since the dawn of computing (mid-1940s) and the fragmentation that occurred as a result of the personal computer (1980s), we have been taking steps back toward what was once called “centralized computing” or “time sharing of computing”. The latest installment of this iterative process has been dubbed “cloud” computing, and we are on the back side of this marketing buzzword.
“Cloud computing” has had its four-ish-year run as the latest and greatest buzzword in computing – it has spawned conferences and its own open-source efforts such as Eucalyptus (2008) and OpenStack. Prior to cloud, it was “grid computing” in the mid-2000s; prior to that, “client-server models” were prevalent, with hints of “thin clients” running around; prior to that, we were just being introduced to Windows 98 (remember that!!) – amazing to consider how much has changed over the last decade.
Now we sit, after four years of development and market acceptance, with all sorts of offshoots of the term “cloud” – we have Private, Public, Hybrid, and even Community clouds. All of which begs the question: just what does the term mean any longer?
On top of this fragmentation of the term, we also have a number of real-world, high-profile moves away from the original and most popular of the clouds, Amazon Web Services (AWS) – such as Zynga’s move to its own “build to suit” infrastructure – signaling that one size does not fit all.
Taking into consideration the evolution of computing, we are still on the path to “computing as a utility”, and cloud has represented a step in that universally sought direction. However, I think we are going to see new terminology emerge to describe the next step – and what is that next step, you ask?
In a perfect world, application developers will simply design their apps to APIs, which in turn leverage the underlying infrastructure – which comprises the same things it has since computers were invented: compute resources, memory, storage, and networking (keep an eye on OpenFlow as routing moves toward software). In parlance that had a short lifespan, but still exists, this was broken out into Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Application as a Service (AaaS – don’t blame me for this last one!!), whereby the APIs live in the PaaS portion of this consolidated “stack”.
Again, in the world we seek to develop, the IaaS portion will have become “self-aware”: all things associated with providing those resources will be connected, everything from “power to packet” – data center infrastructure (HVAC, electrical, temperatures, etc.) and computing resources alike – such that IaaS can determine the optimal resource configuration for any application at any given point in time: where it is stored, how it is stored, how it interacts with end users and from where, as well as optimal business drivers like $/kW/hr, Watts/Transaction, and transactions per gross-margin dollar. Are we there yet? Not quite.
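A minimal sketch of what one slice of that “self-aware” behavior might look like: ranking candidate sites for a workload by energy cost per transaction. The site names, prices, and the Watts/Transaction figure are all invented for illustration; a real system would fold in latency, redundancy, and live telemetry rather than static numbers:

```python
def best_site(sites: list[dict], watts_per_txn: float) -> str:
    """Pick the site with the lowest energy cost per transaction.
    Treats watts/transaction as the workload's energy intensity and
    scales it by each site's electricity price ($/kWh)."""
    def cost_per_txn(site: dict) -> float:
        return (watts_per_txn / 1000) * site["dollars_per_kwh"]
    return min(sites, key=cost_per_txn)["name"]

# Hypothetical sites; in a self-aware IaaS these figures would stream in live
sites = [
    {"name": "vegas",    "dollars_per_kwh": 0.05},
    {"name": "virginia", "dollars_per_kwh": 0.08},
]
print(best_site(sites, watts_per_txn=2.5))  # → vegas
```

The interesting part is the objective function: once business drivers like $/kW/hr and Watts/Transaction are first-class inputs, placement becomes an optimization problem rather than a manual capacity-planning exercise.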
As outlined, the “cloud” has brought a great many benefits to the industry at large: it has provided an agreed-upon terminology (although imperfect), it has centralized the goals of infrastructure engineers around the paradigm described above of self-aware, programmable infrastructure, and it has, at this point, been tested enough for us to realize that not all applications are the same and, thus, each app leverages infrastructure slightly differently.
So while cloud has had its “15 minutes”, the industry at large is moving on toward the ultimate goal of “smart infrastructure” – I posit that this is now possible due to the move to IPv6; however, that is a post unto itself.
The “Cloud Era” is in our rear-view mirror, and we are in the midst of a “marketing-speak transition”, seeking the naming convention for this new era. I think IaaS, PaaS, and AaaS are pretty good proxies for what is to come; however, I’ve been playing with derivatives of “build to suit”, “bespoke infrastructure”, “dynamic infrastructure”, or just “mixed infrastructure”, where the underlying building blocks are the same and simply fine-tuned to the app’s needs (PS – this is where APIs will play an ever more important role in this new era).
Please let me know if you’ve thought of any other terms that speak to what we are living now.