Saturday, March 13, 2010

Risky Business

When I look back over my career in IT, I realize that I was involved in sales long before I decided to join Compellent. Every strategic initiative I pushed for over the years involved a great deal of salesmanship, evangelization and consensus building. Technologies that are commonplace today got there, in large part, because someone on the front lines identified a trend, thought about how it could help drive value for the business and started putting a case together for adopting the new technology in their own data center.

I recall years ago, as an IT administrator for a regional bank, suggesting that we drop our Token-Ring network topology for new branch office rollouts in favor of Ethernet. It seems like a no-brainer now, but at the time there was a lot of concern about the risk associated with making a directional change like that. The concerns were typical of (and justified for) any shop faced with the prospect of doing things in a new way. Will our applications behave differently? Are there hidden costs associated with the change? How will support be impacted? Will this introduce complexity?

No Escape
Change is a difficult and necessary part of IT life, and it carries risk. However, there is no escape from risk because NOT adapting and changing also carries risk. Sometimes changes are forced upon you by other entities (government regulations), circumstances (mergers and acquisitions) or drivers (remember Y2K?) beyond your control.

Managing risks due to change on a tactical level involves policies and procedures to establish controls and checkpoints as well as contingency plans. On a strategic level, I think the best way to reduce risk is through simplicity.

Avoid Complexity
One thing I quickly learned as an IT manager was that complexity is your enemy – always. Complexity, in my opinion, is the offspring of laziness and the cousin of carelessness. The more complex your environment, the more difficult and costly it is to manage and to adapt to change, which means you have one big ass cloud of risk hanging over your head.

The opposite of complexity in the data center is elegance. A solution that is simple to understand, manage and maintain, and is effective at lowering costs and delivering service, is an elegant solution. Compellent’s Fluid Data Architecture is one such elegant solution, and I know this because every time I do a demo for a prospective customer they light up – they understand how elegant our solution is.

Spare Some Change?
On March 24th from 2-3PM CT you’ll have an opportunity to chat with one of our customers, Ben Higginbotham. I’ll moderate a Twitter chat with Ben on the topic of change and risk in IT. Here are the details. I hope you’ll join in and ask questions or share your own successes or lessons learned.

Tuesday, March 2, 2010


Early in the last decade, if you were making a storage purchasing decision, you likely would have been frustrated with sales presentations, analyst reviews and industry news about storage virtualization to the point that you’d rather purchase ANY product that didn’t have this mysterious capability. Of course, the uncertainty continues today, although the hype has died down. Unfortunately, another buzz phrase has cropped up to befuddle the marketplace – automated tiered storage (ATS).

I’m watching this unfold as I transition from end-user to peddler of shared data storage, and several recent blog posts and tweet threads clearly indicate that ATS, as an idea, is being abused much as storage virtualization has been for years.


Why is this happening? If you consider that all suppliers sell to their strengths, then it’s no wonder. You’ll generally have a leader or two who develop a conceptual feature into a real working product and start to pull in some mindshare (and, hopefully for them, market share as well). When a feature or function starts to gain traction, folks on the supply side generally fall into one of two camps: “We have that too” or “You don’t want that,” and the debate rages from there.

The root of the confusion lies with the “Me too!” crowd because, well, they may not actually HAVE it, but they have something close enough that they can fudge a little and get away with it. This feeds the “You don’t need it” side the fuel of uncertainty, which they use to capitalize on customer frustration.


A big part of the confusion around ATS concerns the role of SSD as a tier of storage. Since SSD acts like disk but performs like traditional storage cache, it doesn’t fit neatly into either category. For example, many (possibly all, I don’t know for certain) disk array systems will bypass write cache for SSD-bound blocks.

Does that now make SSD cache? Well, not according to the SNIA Technology Council’s storage dictionary. Cache is both temporary and performance-enhancing. While SSD certainly improves performance, it is arguably not temporary storage.

The bottom line is that SSD can be a healthy part of your ATS solution. And it’s easy to see that eventually it will be a big part, with traditional enterprise disk squeezed out by SSD on the high end and by big, slow SATA/SAS disk on the low end. Who knows, maybe it will all be SSD at some point? Or bubble memory? Or quantum dots?
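To make the idea concrete, here is a minimal sketch of what an automated tiering decision can look like: blocks that were touched often in the last measurement window earn the fast, expensive tier; cold blocks sink to cheap disk. This is a toy illustration only; the tier names and thresholds are made up, and it is not Compellent’s (or any vendor’s) actual placement algorithm.

```python
# Toy automated storage tiering sketch (illustrative, not a real product's logic).
# Tiers, fastest to slowest, with hypothetical names: SSD, FC (fibre channel), SATA.

def assign_tier(access_count, hot_threshold=100, warm_threshold=10):
    """Place a block on a tier based on accesses in the last window.
    Thresholds are arbitrary numbers chosen for the example."""
    if access_count >= hot_threshold:
        return "SSD"   # hot data earns the fast, expensive tier
    if access_count >= warm_threshold:
        return "FC"    # warm data stays on mid-range disk
    return "SATA"      # cold data sinks to big, slow, cheap disk

# Example: per-block access counts collected over the last window
blocks = {"block-1": 500, "block-2": 42, "block-3": 3}
placement = {name: assign_tier(count) for name, count in blocks.items()}
print(placement)  # {'block-1': 'SSD', 'block-2': 'FC', 'block-3': 'SATA'}
```

A real array would, of course, track access patterns continuously and migrate data non-disruptively in the background, but the core decision is this kind of hot/warm/cold classification.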

The point is, don’t let confusion around the future face of tiered storage scare you away from adopting ATS today; you can reap real benefits here and now. Just make sure you’re choosing an architecture that will accommodate the changing landscape and you’ll be fine.