Friday, July 16, 2010
There were some interesting tweets that I mostly stayed away from today regarding the benefits of Compellent's Fast Track performance enhancement feature. I'll let that particular dog be and await a clarification from the marketing folks at Compellent. However, I do want to take the opportunity to talk about Fast Track and explain what it is - and what it is not (all in my humble opinion, of course).
Some background on short stroking. Short stroking is a performance enhancement technique that improves disk IO response time by restricting data placement to the outer tracks of the disk drive platters.
This technique has been around for a long time and it's pretty easy to do (format and partition only a fraction - say 30% - of the disk capacity). The trade-off, obviously, is that your cost per usable gigabyte goes up. You could partition the remainder of the disk and use that for data storage as well, but this could put you back into a wildly swinging actuator arm situation, which is precisely what you're trying to get away from.
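To put rough numbers on that trade-off, here's a back-of-the-envelope sketch in Python. Every figure in it (drive size, price, seek time, the 30% fraction) is invented for illustration, and treating seek time as linear in distance is a crude simplification of real drive behavior.

```python
# Back-of-the-envelope look at the short-stroking trade-off.
# All numbers below are hypothetical; real drives and prices vary.

drive_capacity_gb = 450          # raw capacity of one hypothetical drive
drive_cost = 600.0               # hypothetical cost per drive, in dollars
full_stroke_seek_ms = 8.0        # average seek across the whole platter
short_stroke_fraction = 0.30     # only format the outer 30% of the tracks

# Usable capacity shrinks to the formatted fraction...
usable_gb = drive_capacity_gb * short_stroke_fraction

# ...so the effective cost per usable GB rises accordingly.
cost_per_gb_full = drive_cost / drive_capacity_gb
cost_per_gb_short = drive_cost / usable_gb

# Seek distance drops because the actuator only sweeps a slice of the
# platter; treating seek time as proportional to distance is a rough
# simplification, but it shows the direction of the benefit.
approx_seek_ms_short = full_stroke_seek_ms * short_stroke_fraction

print(f"Usable capacity: {usable_gb:.0f} GB of {drive_capacity_gb} GB")
print(f"Cost per GB: ${cost_per_gb_full:.2f} full stroke vs ${cost_per_gb_short:.2f} short stroked")
print(f"Approx. average seek: {full_stroke_seek_ms} ms vs ~{approx_seek_ms_short:.1f} ms")
```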
Enter Fast Track. As a feature of Compellent's Fluid Data Architecture, it provides the benefits of having your read/write IO activity confined to the "sweet spot" of each and every disk in the system while placing less frequently accessed blocks in the disk tracks that would normally go unused in a traditional short stroking setup. It's like cheating death. OK, maybe not that good, but it certainly has benefits over plain old short stroking.
If you're familiar with Compellent's Data Progression feature, this is really just an extension of that block management mechanism. Consider that the most actively accessed blocks are generally the newest. So, if we assume that a freshly written block is likely to be read again very soon, it's a good bet that placing it in an outer track of a given disk with other active blocks will reduce actuator movement. Likewise, a block or set of blocks that haven't been out on the dance floor for a few songs probably won't be asked to boogie anytime soon - or at least not frequently - so we can push those to the back row with the other 80% of the population. It may take a few extra milliseconds to retrieve those relatively inactive blocks, but an occasional blip isn't likely to translate into application performance problems. And this block placement is analyzed and optimized during each Data Progression cycle, so unlike short stroking, you're not sticking stale data in the best disk real estate.
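If it helps to see the idea in code, here's a minimal, hypothetical sketch of that placement logic - not Compellent's actual algorithm, just the general shape: rank blocks by recent activity each cycle and map the hottest slice (the 20% split and the scoring are assumptions for illustration) to the outer tracks.

```python
from dataclasses import dataclass

# Hypothetical sketch of activity-based block placement. This is NOT
# Compellent's actual Data Progression/Fast Track implementation - just
# the general idea: hot blocks go to the outer (fast) tracks, cold
# blocks get pushed to the inner tracks, and the decision is revisited
# every cycle.

@dataclass
class Block:
    block_id: int
    last_access: float   # e.g., seconds since epoch
    access_count: int    # accesses seen since the previous cycle

def placement_cycle(blocks, fast_fraction=0.2):
    """Return (outer_tracks, inner_tracks) for one optimization cycle."""
    # Rank by recency first, then by how often the block was touched.
    ranked = sorted(
        blocks,
        key=lambda b: (b.last_access, b.access_count),
        reverse=True,
    )
    cutoff = int(len(ranked) * fast_fraction)
    outer_tracks = ranked[:cutoff]   # the "sweet spot" of each disk
    inner_tracks = ranked[cutoff:]   # the back row with the other ~80%
    return outer_tracks, inner_tracks

# Because placement_cycle() runs again each cycle, yesterday's hot data
# doesn't get to squat on the best disk real estate once it cools off -
# which is the difference from a static short-stroked layout.
```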
So, in reality, Fast Track is an optimization feature which provides an overall performance boost without sacrificing storage capacity. Comparisons to short stroking help explain the benefits of Fast Track, but it's really much more than that. Obviously, short stroking still provides the best guaranteed performance, since you're removing variables that we have to live with in a contended, shared storage world. But that's a wholly different issue - I've never advocated shared storage (using any product, mind you) as a way to increase disk performance. Fast Track brings a legacy performance tuning concept into the shared storage economy without increasing administration complexity.
Friday, June 25, 2010
The Games We Play - How Console Games Are Like Integrated Stacks
In response to Chuck Hollis and his views on integrated versus differentiated stack infrastructure, I was most interested in his example case of building your own PC to make the point that integrated stacks will win out.
I’m not going to prognosticate – that’s for the industry giants like Chuck, Chris Mellor and others to debate. In the end, it doesn’t matter to me (and it’s one reason I moved to sales – there’s always something to sell and someone to buy it). But if you run a data center or are responsible for IT in your organization, it should.
Chuck’s example using PC technology is fine, if you don’t consider the application and the desired functionality. In my case, I build my own PCs primarily with gaming in mind – I’ve done this for many years. It’s interesting that even though console (think XBOX 360) game sales have outstripped PC game sales for a good long while now, and despite all the benefits of what you could call an “integrated gaming stack”, PC gaming (the best-of-breed, differentiated stack) is still hanging in there. Possibly making a few dying gasps for air, even.
The parallels with enterprise concerns are interesting and I think we can draw some conclusions based on what’s going on in gaming currently.
First, let’s examine why the integrated gaming stack has been so popular and all but crushed legacy PC gaming.
- Lower startup costs
- Guaranteed compatibility
- Ease of use / single interface
- Integrates with other home entertainment technology
You can probably think of others, including the all-important “cool factor” of something new and different (I’m picturing Eric Cartman waiting for the Wii to be GA). Don’t discount the cool factor for enterprise decisions – everyone’s always checking the other guy out to see what they’re up to and how they’re doing it.
So, the integrated gaming stack looks like a clear win, right? Maybe. Consider the following:
- Substandard graphics, storage and processing capability versus PC gaming. I think we can all agree that giving up the pain associated with building your own gaming rig means accepting some lower standards. While you have only yourself to blame for not researching your component technologies beforehand, in a console world you are dealing with the best of the mediocre, or what I’ve heard some industry bigwigs call “good enough” technology.
- Technology lock-in, unless you hack your console (but then you forfeit ease of use, right?). In the PC world, I can buy, sell and trade (EULA permitting, of course) games with all players. In the console world, I’m stuck if I buy the wrong stack – maybe not a big deal for gaming, but think about the enterprise that runs “XBOX 360” stacks and wants to merge with a company running on “PS3” applications… oh, some lucky “stack integrator of integrated stacks” stands to make some nice coin on the conversion.
- Console games SHOULD theoretically be less expensive than PC editions because of the platform compatibility (in other words, consoles all use the same hardware and drivers, while PC owners can choose from virtually limitless combinations of video, processor and input devices). However, console games are priced the same (and a couple of years ago were even about $10 more). Who’s benefiting from compatibility – the stack provider or the customer? (There’s another interesting situation going on with ebooks, which should be cheaper to publish, but somehow those savings aren’t being passed on to readers or authors.)
- Finally, the hidden costs of console gaming are rarely considered because they show up as a gradual tax rather than an upfront cost. Want to play online with your friends? That’s going to cost you extra. Want downloadable content for value-added play? Buy some credits. Don’t forget specialized input devices for Rockband or other interactive games (which are very limited in selection, heavily licensed and typically of low quality).
Welcome to console gaming a la the integrated gaming stack – give us your credit card number, sit back and ignore the sucking sound emanating from your checking account.
So, in the end you haven’t reduced your cost – you just transferred that cost to a (gaming) cloud provider. Depending on how much you game and what features you require, you could actually see increased costs. Hopefully not, but keep your eye on the ball.
I agree with Chuck’s statement that “both perspectives are right” but I don’t see the value going wholesale up the stack as his example indicates. Smart and strategic IT leaders are going to need to make sure that the integrated stack is really, honestly delivering on the value promise.
Wednesday, May 26, 2010
SSD at Home
This past weekend I installed a new Intel X25-M 80GB solid state drive into my home PC (which I use for work and play).
I had no end of fun clearing out my 1TB disk, formatted as my C: drive (plus the “system reserve” partition), by moving my documents and certain key applications to another partition in the system. After I was done playing the “sliding square number puzzle game” with my data to pare the combined C: and system reserve down to under 80GB with some headroom, I used Partition Wizard Home Edition to move the boot and system partitions to my new SSD.
From there it was a matter of changing the boot order in the BIOS, then running a Windows 7 repair after a failed boot attempt, and I was off and running.
Drum roll, please!
Does it boot quicker? Oh yes. But, considering that I only reboot about once a week (I usually have the system sleep during idle time) it’s not a huge improvement for me.
Was 80GB enough? Yes, but I'm down to about 18GB free now. My Steam game files now all reside on a spinning disk, but honestly I've not had disk IO bottlenecks with my games.
So… why did I spend money and time on this? Frankly, I wanted to speed up work I was doing in Excel and Perfmon with customer performance data. It didn't really help that much, and I'm still trying to figure out why. Shame on me, but I assumed the bottleneck was disk, because I've got an AMD Phenom II X4 955 running at 3.2GHz with 8GB of RAM, and I keep CPU and RAM monitor widgets loaded and check them often when things are going slowly.
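For what it's worth, here's a rough sketch of the kind of check I mean, using the third-party psutil package (not something I actually ran, just an illustration, and you'd need to pip install psutil). Run it while the slow Excel/Perfmon work is going; if the CPU is pegged while disk throughput stays low, a faster drive was never going to be the fix.

```python
import psutil  # third-party: pip install psutil

# Sample CPU, memory and disk activity while the slow workload (say,
# crunching Perfmon exports in Excel) is running. If the CPU sits near
# 100% while disk throughput stays low, the SSD was never the answer.

def sample(seconds=30, interval=1.0):
    prev_io = psutil.disk_io_counters()
    for _ in range(int(seconds / interval)):
        cpu = psutil.cpu_percent(interval=interval)   # % over this interval
        mem = psutil.virtual_memory().percent         # % of RAM in use
        io = psutil.disk_io_counters()
        mb_read = (io.read_bytes - prev_io.read_bytes) / 1e6
        mb_written = (io.write_bytes - prev_io.write_bytes) / 1e6
        print(f"CPU {cpu:5.1f}%  RAM {mem:5.1f}%  "
              f"disk read {mb_read:7.1f} MB  written {mb_written:7.1f} MB")
        prev_io = io

if __name__ == "__main__":
    sample()
```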
Your mileage may vary, but overall I'm not getting what I thought I'd get in return for my SSD investment. Then again, it was pretty much an impulse buy, and gosh, I just wanted to be the first kid on my block with a solid state drive.
If you want faster boot time or performance improvement for your laptop, I’d check out the new Seagate Momentus XT drives.
Important Notes
- Of course, always back up your data before moving anything.
- Move documents using the Location tab in the properties of the various user folders (e.g., My Documents, My Pictures, etc.).
- Steam files can be moved to a safe location and then copied back after reinstalling Steam (you don't need to redownload your games or content).
- Make sure you turn off defrag for any logical drives on your new SSD.
- Make sure you move or RE-move your page files so you aren't thrashing your SSD. Page files are probably not needed if you have ample RAM anyway.
- Intel provides a utility to schedule and run TRIM - make sure you use it to maintain optimal drive write performance. Once a week is recommended. (A quick way to check a couple of these settings from a script is sketched below.)
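If you'd rather double-check a couple of these items from a script than dig through dialogs, here's a small sketch for Windows. It sticks to things I know exist (Python's shutil and the built-in fsutil TRIM query); the drive letter is an assumption, and it doesn't replace the Intel utility - it just confirms the OS-level TRIM setting.

```python
import shutil
import subprocess

# Post-migration sanity checks on Windows. Run from an elevated prompt.
SSD_DRIVE = "C:\\"   # assumption: the SSD now holds the system/boot volume

# 1. How much headroom is left on the 80GB drive?
usage = shutil.disk_usage(SSD_DRIVE)
print(f"SSD free space: {usage.free / 1e9:.1f} GB of {usage.total / 1e9:.1f} GB")

# 2. Is TRIM (delete notification) enabled at the OS level?
#    "DisableDeleteNotify = 0" means TRIM commands are being issued.
result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True,
)
print(result.stdout.strip() or result.stderr.strip())
```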
Monday, May 17, 2010
FUD Slinging - Why It Is Poison
I made myself a few promises when I jumped the fence from end user to peddler of storage goods. Among those was that if I ever had to compromise my integrity or ethics I'd go find something else to do. This means that I have to believe in what I'm selling and that the product can stand on its own merits. It also means that I am free to be truthful with a prospect and walk away from an opportunity that doesn't make sense.
Happily, during my onboarding and initial training with Compellent these points were firmly established by the management team all the way from the top to my direct leadership. One of the points made, emphatically, was that it's a very bad idea to talk about the competition to your customer.
I heartily agree with this point of view. Based on my experience as a customer sitting through countless sales presentations I can tell you that there are a variety of reasons which make spreading FUD a bad practice and virtually no good ones.
1. When you talk about your competition you're taking time out from selling your solution. Time is golden. Every minute in front of a prospective customer is a chance to listen and learn and help solve their problems. Every minute spent bad mouthing your competition robs you of a chance to sell your value.
2. FUD is typically based on outdated or inaccurate information. I spend a great deal of my free time getting intimately familiar with my product. I do research competitive offerings just so I know how I stack up in a given account. The customer has allowed me in to talk about what I know best - my product.
3. It's annoying. Really. Sometimes customers will ask for competitive info and that's fine. But even then you're probably going to offend someone in the room, depending on how you approach those particular questions. I always try to keep it positive when asked about the competition - "Vendor X makes a really great product, it works well and they've sold a lot of them. However, this is how we're different and we believe this is a better fit for you."
4. It's potentially dangerous. First, if I spout off about a "weakness" in the competition's product, I've just given the customer a reason to invite them back in to answer my accusations. Bad for me. Secondly, my blabbering on and on about how badly my competition sucks may leave the customer wondering why I protest too much. Finally, if the FUD you spread turns out to be unfounded, the customer could then be convinced you don't know your ass from a hole in the ground (and rightly so). To be honest, I really do like it when my competition has been in before me and spread FUD, because I then get to spend more time talking about my product and feature set while eroding the customer's confidence in the other guy.
Anyway, that's my take. I'm not going to say I've never spread FUD. It's too tempting and sometimes the stress of the situation leads you to not think rationally and say all sorts of stupid things! But, as a practice in front of customers and in social media I do my level best to keep the conversation above the level of degrading anyone's company or product.
Friday, April 9, 2010
Dear Jon - Use Cases for Block-Level Tiering
Yesterday afternoon I happened to pop into Tweetdeck and saw a tweet from Jon Toigo -
I'm interested in exploring the case for on-array tiering. Makes no sense to me. Sounds like tech for lazy people!...
I engaged Jon in a quick tweet discussion (I'm the "inbound tweet") until we both had to attend to personal matters, but I wanted to come back to this statement because he's brought this up before and I find it a little bothersome. Not because Jon's asking a question or challenging the use case - I'm perfectly fine with that.
My rub is that his premise seems to be that block-level tiering is being positioned as a replacement for data management policy and best practices. That's not the story - at least not the Compellent story. For example, Jon's last tweet on the matter was this:
Seems like understanding your data and applying appropriate services to it based on its business context is becoming a must have, not a theoretical nice to have.
If anyone's selling array-based block-level tiering as a replacement for data management policy, archiving best practices, private information security and the like, I'm not aware of it. This is a pure storage optimization play. There's nothing about automated block-level tiering that would prevent the development, implementation or enforcement of a good data management policy.
What makes my ears prick up is when a statement like Jon's attempts to paint automated block-level tiering as an evil, when it's nothing of the sort. You want to implement an HSM scheme or data management policy on top of ATS? Go right ahead - my guess is you'll still have data that is less actively accessed (or practically inactive, for that matter) until the data management police take appropriate action.
On Jon's blog, he quotes an anonymous former EMC employee:
The real dillusion [sic] in the tiered storage approach is that the data owners – the end users – have no incentive to participate in data classification, so the decisions get left to data administrators or worse (software). Just because data is accessed frequently doesn’t mean the access is high priority.
This really sums up the ATS naysaying. First, it's not a delusion to say that data owners aren't incented to participate in data classification. It's an ironclad fact. If you're lucky enough to have published a data retention policy, the exemptions and exclusions start flying almost before the ink is dry. Still, I don't believe that ATS is a solution to that problem, but rather a reaction to it.
Secondly, the whole concept that ATS is somehow trying to equate data access with criticality is, in my opinion, fallacious. At some level, yes, access frequency tells us a lot about the data - chiefly that there's a lot of interest in it. It doesn't tell us that the data is necessarily important to the business. It may be - it likely is. It may not be. Conversely, infrequently accessed blocks may contain business-critical data. Then again, maybe they don't, and likely that data is less critical (now) because it's not being accessed frequently (now).
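To make the separation concrete, here's a small hypothetical sketch (not any vendor's actual data structures): the attribute the array acts on - access heat driving tier placement - is independent of the attribute a data management policy acts on - business classification driving retention and handling. Re-tiering never changes the classification, and vice versa.

```python
from dataclasses import dataclass

# Hypothetical sketch: tier placement (driven by access heat) and data
# classification (driven by policy / the data steward) are independent
# attributes. Neither decision overwrites the other.

@dataclass
class DataExtent:
    extent_id: int
    accesses_last_cycle: int = 0
    tier: str = "capacity"            # set by the array from access heat
    classification: str = "unknown"   # set by the data steward / policy

def retier(extent, hot_threshold=100):
    # The array's decision: purely about where the blocks physically live.
    if extent.accesses_last_cycle >= hot_threshold:
        extent.tier = "performance"
    else:
        extent.tier = "capacity"

def classify(extent, label):
    # The policy decision: retention, security, archiving - untouched by retiering.
    extent.classification = label

e = DataExtent(extent_id=1, accesses_last_cycle=5)
retier(e)                          # cold extent lands on the capacity tier
classify(e, "business-critical")   # still business critical, wherever it lives
print(e)
```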
So ATS gives you a way to store data cost effectively, without impeding the data steward from taking action to classify the data and handle it appropriately. It's not an enemy of data management - nor an ally for that matter. So why is it drawing so much ire from Jon?
Jon, feel free to continue to champion better data management practices - I'm with you. But please don't waste energy fighting something that adds value today while we're waiting for that battle to be won.
As for the use cases - ask your friendly neighborhood Storage Center user!
Friday, April 2, 2010
Hotel Compellent
Tommy posted a great analogy piece on his blog which explains storage in terms of hotel ownership and occupancy rates versus room cost. Not to take anything away from Tommy, because it was a good example for non-technical audiences, but I want to point out that analogies, like statistics in USA Today, can be used to position your point favorably while making a seemingly fair comparison.
Let me illustrate. Let’s use the storage-as-hotel example but make some modifications. Hotel X has an outstanding occupancy rate – or shall we say occupancy capacity – but our enterprising new owner quickly finds that he has a problem. Following the advice of his architects, he’s built a high-end hotel with luxury rooms outfitted with imported artwork, complimentary services like turn-down and pillow chocolates, and other fancy features. Because of this, he finds that his operational expenses (opex) begin to erode the apparent capital expense (capex) efficiencies he thought he’d realized because of the higher capacity.
On top of that, Hotel X has a highly trained and experienced staff waiting to serve the guests’ every whim, from bell service, to concierge, to shoe shine, to someone who will hold the door open for you.
Who wouldn’t love to stay at Hotel X? I would – sounds like a great place!
But, I can’t afford it. Nor can many travelers who simply need a place to sleep, shower and maybe make a few phone calls at the end of the business day. Hotel E might be the perfect place for them. Clean, comfortable and cheap. Not too many services available but you can get a free cinnamon bun in the morning and a complimentary cup of coffee.
But consider the proprietors of Hotel C – let’s call them Phil, Larry and John. They’ve been in the business for many years and they know hotels and more importantly they understand guests. They know that most guests (say 80%) really just need a place to sleep for a night or two and don’t want to pay a lot for stuff they don’t need. The other 20% are high end travelers, VIPs or executives who expect the best and demand all sorts of expensive services – and they have the money to pay for it. So, Phil, Larry and John build a hotel to meet the needs of everyone.
Not only can a guest choose a room that meets their demands; if those demands should change, they can upgrade or downgrade to a more appropriate level of service.
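To hang some rough numbers on that 80/20 idea (every figure below is invented for illustration, not real pricing), the arithmetic looks like this:

```python
# Invented numbers to illustrate the 80/20 economics behind "Hotel C":
# most data lives on inexpensive capacity, a minority on premium space.

total_tb = 100
premium_cost_per_tb = 3000.0   # hypothetical $/TB for the high-end tier
budget_cost_per_tb = 800.0     # hypothetical $/TB for the capacity tier

all_premium = total_tb * premium_cost_per_tb
tiered = (0.20 * total_tb * premium_cost_per_tb) + (0.80 * total_tb * budget_cost_per_tb)

print(f"Everything in 'Hotel X': ${all_premium:,.0f}")
print(f"80/20 split in 'Hotel C': ${tiered:,.0f}")
print(f"Savings: {100 * (1 - tiered / all_premium):.0f}%")
```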
OK, you get the point, and I could go on and on with this. I even thought about talking about Hotel E as an example of vendor lock-in (“you can check out any time you like, but you can never leave”). But the bottom line is this: take the time to understand what you’re getting, or you’ll go broke staying in places like Hotel X and won’t have money to get back home!
Saturday, March 13, 2010
Risky Business
When I look back over my career in IT, I realize that I was involved in sales long before I decided to join Compellent. Every strategic initiative I pushed for over the years has involved a great deal of salesmanship, evangelization and consensus building. Technologies that are commonplace today got there, in large part, because someone on the front lines identified a trend, thought about how it could help them drive value for the business and started putting a case together for adopting the new technology in their own data center.
I recall, years ago, as an IT administrator for a regional bank, suggesting that we drop our Token-Ring network topology for new branch office rollouts in favor of Ethernet. Seems like a no-brainer now, but at the time there was a lot of concern about the risk associated with making a directional change like that. The concerns were typical (and justified) of any shop faced with the prospect of doing things in a new way. Will our applications behave differently? Are there hidden costs associated with the change? How will support be impacted? Will this introduce complexity?
No Escape
Change is a difficult and necessary part of IT life, and it carries risk. However, there is no escape from risk because NOT adapting and changing also carries risk. Sometimes changes are forced upon you by other entities (government regulations), circumstances (mergers and acquisitions) or drivers (remember Y2K?) beyond your control.
Managing risks due to change on a tactical level involves policies and procedures to establish controls and checkpoints as well as contingency plans. On a strategic level, I think the best way to reduce risk is through simplicity.
Avoid Complexity
One thing I quickly learned as an IT manager was that complexity is your enemy – always. Complexity, in my opinion, is the offspring of laziness and the cousin of carelessness. The more complex your environment, the more difficult and costly it is to manage and adapt to change, which means you have one big-ass cloud of risk hanging over your head.
The opposite of complexity in the data center is elegance. A solution that is simple to understand, manage and maintain, and is effective at lowering costs and delivering service, is an elegant solution. Compellent’s Fluid Data Architecture is one such elegant solution, and I know this because every time I do a demo for a prospective customer, they light up – they understand how elegant our solution is.
Spare Some Change?
On March 24th from 2-3PM CT you’ll have an opportunity to chat with one of our customers, Ben Higginbotham of WhereToLive.com. I’ll moderate a Twitter chat with Ben on the topic of change and risk in IT. Here are the details. I hope you’ll join and ask questions or share your own success or lessons learned.