Last week was my first Gartner Data Center conference. About 3000 people gathered to compare notes and learn from one another. Know this: you are not alone in feeling overwhelmed by the ever-growing pile of information-age data.
I’ll run through 10 top take-aways between today and tomorrow:
1) Who was there?
Public sector techies outnumbered every other industry, but all the usual suspects were represented: manufacturing, finance, entertainment, healthcare, and so on. I also hadn’t realized how international the audience would be.
2) What was the hottest topic?
Disasters. I attended a dozen sessions, and hardly any failed to mention Superstorm Sandy. Plus the Gartner team added a session on lessons learned from Sandy. While the economic effects of Sandy are still being tallied, the Economist recently reported that Atlantic City’s casinos lost $15M per day during a previous storm’s closure. Yet many people I talked to weren’t sure how to calculate what downtime would cost them. Which makes me think many IT professionals are NOT getting proper credit for the downtime they’re preventing, and that’s a shame. Especially since the average large enterprise experiences 16 outages per year, costing them $300K per incident. If you beat that, your leadership should give your team credit for contributing $300K per averted incident to your company’s bottom line.
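Those figures make the downtime math easy to run yourself. Here’s a back-of-envelope sketch using the numbers quoted above (16 outages a year at $300K each); the function names and the 10-incident example are my own illustration, not Gartner’s:

```python
# Back-of-envelope downtime cost calculator, using the figures quoted
# in this post: 16 outages per year at $300K per incident. Substitute
# your own numbers.

def annual_downtime_cost(incidents_per_year, cost_per_incident):
    """Estimated yearly cost of downtime."""
    return incidents_per_year * cost_per_incident

def averted_incident_value(baseline_incidents, actual_incidents, cost_per_incident):
    """Bottom-line credit for beating the baseline incident rate."""
    return max(baseline_incidents - actual_incidents, 0) * cost_per_incident

baseline = annual_downtime_cost(16, 300_000)        # $4,800,000 per year
savings = averted_incident_value(16, 10, 300_000)   # $1,800,000 for 6 averted outages
print(f"Typical annual downtime cost: ${baseline:,}")
print(f"Credit for averting 6 incidents: ${savings:,}")
```

If your team only suffers 10 outages where the average shop suffers 16, that’s $1.8M of averted losses your leadership should know about.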
3) What’s the cloud for?
Apparently the majority of folks in attendance at #GartnerDC were over the supposed cost savings of cloud computing. Most of the talk about cloud had to do with agility, not cost savings. They want to be able to launch new products and services lickety-split. Burst up in capacity as needed. And nobody wants to be seen as the Luddite who installed last-millennium infrastructure. Warning: do not play drinking games at Gartner events, especially if your card has the word “cloud” on it. I’m sure Vegas has more than its fair share of alcohol poisoning already.
4) Just how complex is this going to get?
Complexity was a relentless topic. I’m a numbers gal, so I loved how the Gartner analysts put hard numbers on the growth in complexity we all live with day-to-day. For instance, they said there are 115 settings to get right when you configure Exchange with VMware. How many possible permutations is that? 115 factorial, or about 2.925e+188. A major point was that – even if you don’t have time to understand the complexity you face – you’re still held accountable for what can go wrong if you don’t get all the settings right. And then people wonder why outages happen, and why it takes time to figure them out and fix them.
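That eye-popping figure is easy to sanity-check: 2.925e+188 is 115! (115 factorial), the number of distinct orderings of 115 settings. A one-liner in Python reproduces it:

```python
import math

# 115 configuration settings can be arranged in 115! distinct orders.
# This reproduces the 2.925e+188 figure quoted above.
permutations = math.factorial(115)
print(f"{permutations:.3e}")
```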
5) How productive are you expected to be?
A statistic given as typical was 100 servers managed by one full-time person in a mid-to-large-sized data center. But many people in the audience rolled their eyes at that, and commented afterward that their management would laugh at them if they suggested this level of staffing – they’re already expected to do much more than that. Gartner analysts explained that the largest cloud providers, like Google and Facebook, manage 10,000 servers per person. But they lock down their complexity: standardizing, automating, and getting very precise about each single-purpose workload. I’m guessing they don’t skimp on the instrumentation needed to keep it all running.