For my entire career, one fact has held true: storage is really, really slow. A typical modern processor executes a hundred million instructions in the time it takes a disk arm to move once. That fact is no longer true, and the implications reach across all of IT.
What’s happened is that manufacturers have attached solid-state memory directly to the PCI Express bus. Solid state eliminates disk-arm movement; the PCIe connection bypasses the slow disk-drive access protocols and enables very fast transfers. You can buy a 2 TB storage subsystem capable of 460,000 read IOPS and 175,000 write IOPS off the shelf, delivered in three days, for about $4,400. You’d have to put more than a thousand spinning disk drives behind a mammoth controller to get that kind of performance.
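The gap is easy to quantify with back-of-envelope arithmetic. The figures below are my own round, illustrative assumptions, not benchmarks, but they reproduce the ratios in question:

```python
# Assumed round numbers: how many instructions a CPU retires while
# waiting on a single storage access.

cpu_ips = 1e10      # ~10 billion instructions/sec: a few GHz, superscalar
disk_s = 1e-2       # ~10 ms: one seek plus rotation on a spinning disk
nvm_s = 1e-6        # ~1 us: an access to PCIe-attached non-volatile memory

print(f"{cpu_ips * disk_s:,.0f} instructions per disk access")  # 100,000,000
print(f"{cpu_ips * nvm_s:,.0f} instructions per NVM access")    # 10,000
```

Four orders of magnitude of slack, gone.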
Intel Fultondale Solid-State Drives are Incredibly Fast
No More Slow Storage Performance
I was aware of much faster storage performance – my SSD laptop boots five times faster than my old one – but I hadn’t thought much about the implications until I read the article “Non-volatile Storage” in ACM. I consider it a must-read and cannot recommend it more highly.
The authors reach three conclusions:
1. The age-old assumption that I/O is slow and computation is fast is no longer true: this invalidates decades of design decisions that are deeply embedded in today's systems.
2. The relative performance of layers in systems has changed by a factor of a thousand over a very short time: this requires rapid adaptation throughout the systems software stack.
3. Piles of existing enterprise datacenter infrastructure – hardware and software – are about to become useless (or, at least, very inefficient): this requires rethinking the compute/storage balance and architecture from the ground up.
- In storage systems: Does it still make sense to compress data on write and decompress on read? With 100 million instructions of slack per write, you could do a lot of compression. But with 10,000 instructions per write?
- In databases: The old Microsoft recommendation of 28 disk drives per SQL Server to separate transaction logs, indexes, system software, etc. is just silly. And all the performance-optimization routines are flat-out wrong. Hmm, maybe we don’t need indexing on any of our small databases – a full scan is faster and eliminates all the index-maintenance activity, too.
- Operating systems: OSes have built-in compensation mechanisms for slow drives – lazy writes, piles of threads parked waiting for storage requests to complete, and many more. All of that becomes dead code.
- Virtualization! When one CPU can’t even keep up with its local drive, why would I want to run multiple OSes on the box and slow things down further?
- And so on, and so on.
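To make the compression question concrete, here’s a rough timing sketch. Python and zlib stand in for whatever a storage stack would actually run, and the latency figures in the comments are my assumptions; your numbers will vary:

```python
import os
import time
import zlib

block = os.urandom(2048) * 2          # a 4 KB block with some redundancy
n = 1000
t0 = time.perf_counter()
for _ in range(n):
    zlib.compress(block, level=6)
cost_us = (time.perf_counter() - t0) / n * 1e6

print(f"compressing one 4 KB block: ~{cost_us:.0f} us")
# Against a ~10,000 us spinning-disk write, that cost is noise.
# Against a ~10 us PCIe-flash write, compression can dominate the I/O itself.
```

The compression itself didn’t get slower; the write it was hiding behind got a thousand times faster.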
I wonder what the effect will be on public cloud. Millions of servers to upgrade, telecommunications latency overwhelmed by local transfer speeds, virtualization uncertainty.
If I were bringing in a new application, I’d automatically put the database on one of these drives. Think how simple your storage life becomes without HBAs, Fibre Channel, or LUNs. And no more FCoTR, either.
We’re going to see a huge number of design decisions revisited, and scads of new products. Don’t worry about your current leases – this will take time. After all, it took more than a decade for server virtualization to penetrate half the datacenter. I’m glad we live in these interesting times!