There were some interesting tweets that I mostly stayed away from today regarding the benefits of Compellent's Fast Track performance enhancement feature. I'll let that particular dog be and await a clarification from the marketing folks at Compellent. However, I do want to take the opportunity to talk about Fast Track and explain what it is - and what it is not (all in my humble opinion, of course).
Some background on short stroking. Short stroking is a performance-enhancement technique that improves disk I/O response time by restricting data placement to the outer tracks of the disk drive platters. With data confined to that narrow band, the actuator arm travels shorter distances between requests, so seek times drop.
This technique has been around for a long time, and it's pretty easy to do: format and partition only a fraction - say 30% - of the disk capacity. The trade-off, obviously, is that your cost per usable gigabyte goes up. You could partition the remainder of the disk and use that for data storage as well, but this could put you right back into the wildly swinging actuator-arm situation that is precisely what you're trying to get away from.
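To make that capacity trade-off concrete, here's a quick back-of-the-envelope sketch. The 600 GB capacity and $300 drive price are made-up numbers purely for illustration:

```python
def short_stroke_economics(capacity_gb: float, drive_cost: float, fraction: float):
    """Usable capacity and cost per usable GB when short stroking.

    fraction: the portion of the disk actually partitioned,
    e.g. 0.30 for the outer ~30% of tracks.
    """
    usable_gb = capacity_gb * fraction
    return usable_gb, drive_cost / usable_gb

# Hypothetical 600 GB drive costing $300, short stroked to 30%:
usable, cost_per_gb = short_stroke_economics(600, 300.00, 0.30)
print(usable, round(cost_per_gb, 2))  # 180 GB usable at ~$1.67/GB, vs $0.50/GB for the full drive
```

Same spindle, same price tag - but you're now paying more than three times as much per gigabyte you can actually use.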
Enter Fast Track. As a feature of Compellent's Fluid Data Architecture, it confines your read/write I/O activity to the "sweet spot" of each and every disk in the system while placing less frequently accessed blocks in the disk tracks that would normally go unused in a traditional short-stroking setup. It's like cheating death. OK, maybe not that good, but it certainly has benefits over plain old short stroking.
If you're familiar with Compellent's Data Progression feature, this is really just an extension of that block-management mechanism. Consider that the most actively accessed blocks are generally the newest. So, if we assume that a freshly written block is likely to be read again very soon, it's a good bet that placing it in an outer track of a given disk, alongside other active blocks, will reduce actuator movement. Likewise, a block or set of blocks that hasn't been out on the dance floor for a few songs probably won't be asked to boogie anytime soon - or at least not frequently - so we can push those to the back row with the other 80% of the population. It may take a few extra milliseconds to retrieve those relatively inactive blocks, but an occasional blip isn't likely to translate into application performance problems. And this block placement is analyzed and optimized during each Data Progression cycle, so unlike short stroking, you're not sticking stale data in the best disk real estate.
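Compellent hasn't published the internals of Data Progression, but the recency-based placement described above can be sketched roughly like this. The 20% hot fraction, the `Block` structure, and the two-zone model are my own illustrative assumptions, not Compellent's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    last_access: int  # e.g. the progression cycle in which the block was last touched

def place_blocks(blocks, hot_fraction=0.2):
    """Map each block to the fast outer zone or the slower inner zone.

    The most recently accessed hot_fraction of blocks land in the
    outer tracks ("sweet spot"); everything else gets pushed to the
    inner tracks. Re-running this each cycle keeps placement fresh.
    """
    ranked = sorted(blocks, key=lambda b: b.last_access, reverse=True)
    cutoff = int(len(ranked) * hot_fraction)
    return {b.block_id: ("outer" if i < cutoff else "inner")
            for i, b in enumerate(ranked)}

# Ten blocks, last touched in cycles 1 through 10: the two most
# recently accessed get the outer tracks, the rest go to the back row.
placement = place_blocks([Block(i, i) for i in range(1, 11)])
```

The key difference from short stroking is that last line: because placement is recomputed every cycle, a block that goes cold migrates inward and frees up the sweet spot for newer, hotter data.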
So, in reality, Fast Track is an optimization feature that provides an overall performance boost without sacrificing storage capacity. Comparisons to short stroking help explain the benefits of Fast Track, but it's really much more than that. Obviously, short stroking still provides the best guaranteed performance, since you're removing variables that we have to live with in a contended shared-storage world. But that's a wholly different issue - I've never advocated shared storage (using any product, mind you) as a way to increase disk performance. Fast Track brings a legacy performance-tuning concept into the shared-storage economy without increasing administrative complexity.