Automatic provisioning is taken for granted in so many situations today, yet it is still extremely hard to deploy. When I asked my IT buddy, Jonathan Eunice of Illuminata, why that's still true in 2011, he gave me two reasons, the first of which is variation in application environments.
He said that even before datacenters had to run at Internet scale, applications were deployed on such a wide range of platforms and systems that the variables involved made automating them almost impossible. He had read studies of datacenters trying to consolidate back then; in one case, the datacenter was running 100 different versions of six separate operating systems -- and one of the Unix variants alone had 49 different patch levels!
Moreover, at least in the old days, every application had its own server with a custom operating system and custom drivers. "You would lovingly configure the whole thing, but then the box right next to it ran an app on a different OS or different patch level on a slightly updated piece of hardware," Jonathan said.
And until recently at least, applications and infrastructures were not set up to be dynamically flexible. "If you want applications to be automatically provisioned or orchestrated in any dimension, you have to design them to be so. They have to run in an environment where they understand what the current application load is and what the policies for applying more resources are, and you have to make that shift in the way everything works," Jonathan said.
"I liken it to when automatic transmissions came in. You can't manually shift an automatic transmission very easily. You have to expect that it will do it for you and plan your drive a little differently. You can't engine brake like you used to."
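Jonathan's point that applications must be designed for orchestration can be illustrated with a toy scaling policy. This is only a sketch of the idea: the application reports its load, and a policy decides how many instances to run. The function name, capacity model, and thresholds below are all my own inventions, not taken from any particular product.

```python
import math

def desired_instances(current_load, capacity_per_instance,
                      min_instances=1, max_instances=10):
    """Return how many instances are needed to serve current_load,
    clamped to the allowed [min_instances, max_instances] range.

    This is the kind of explicit, machine-readable policy that
    automatic provisioning depends on: no human lovingly tuning
    one box, just a rule the orchestrator can apply.
    """
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))

# Example: 250 requests/sec against instances that each handle 100
# requests/sec calls for 3 instances.
print(desired_instances(250, 100))  # → 3
```

The point is not the arithmetic but the shift Jonathan describes: once the policy lives in code rather than in an administrator's head, the environment can apply it continuously, the way an automatic transmission shifts for you.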
Stay tuned for problem #2: Crappy Configuration Management...