Autonomic computing is a phrase coined by IBM in 2001; arguably, the frameworks IBM defined as part of that initiative would form much of what is considered Cloud Computing today.
And now 3Par have taken the term Autonomic and applied it to storage tiering. This is really a subset of the Autonomic Computing vision, but it is nonetheless one which has recently gained a lot of mind-share in the infrastructure world, especially if you replace the word Autonomic with the word Automatic, leaving you with Automatic Storage Tiering. But I think autonomic involves rather more than mere automation; autonomic implies some kind of self-management:
- Self Configuring
- Self Healing & Protecting
- Self Optimising
IBM also described a five-level evolutionary path towards autonomic computing:

- Basic
- Managed
- Predictive
- Adaptive
- Autonomic
"The basic level represents the starting point where a significant number of IT systems are today. Each element of the system is managed independently by systems administrators who set it up, monitor it, and enhance it as needed.
At the managed level, systems management technologies are used to collect information from disparate systems into one, consolidated view, reducing the time it takes for the administrator to collect and synthesize information.
At the predictive level, new technologies are introduced that provide correlation among several elements of the system. The system itself can begin to recognize patterns, predict the optimal configuration and provide advice on what course of action the administrator should take. As these technologies improve, people will become more comfortable with the advice and predictive power of the system.
The adaptive level is reached when systems can not only provide advice on actions, but can automatically take the right actions based on the information that is available to them on what is happening in the system.
Finally, the full autonomic level would be attained when the system operation is governed by business policies and objectives. Users interact with the system to monitor the business processes, and/or alter the objectives."
I wonder if 3Par are really at level five of the evolutionary process; in fact, they talk about Adaptive Optimisation as well as Autonomic Storage Tiering. Is that a subconscious admission that they are not quite there yet?
But an Autonomic Computing Infrastructure is something that all vendors and customers should be aspiring to. Of course, there is the long-term issue of how we get the whole infrastructure to manage itself as an autonomic entity, and doing this within a heterogeneous environment is surely a challenge. Still, surely it is the hard things which are worth doing?
Very good post. It turns out that the Xiotech ISE has two of the three autonomic characteristics; self-healing (self-regulation) and self-optimising. For the latter, for example, it recognizes access patterns and adjusts cache behavior to suit. The only thing it is not is self-configuring…but that also may change with the new SNIA CDMI RESTful interface. The potential is there to interpret the (cloud) environment and self-configure. Again, good post. It turns out the self-healing part is the toughest, requiring extremely skillful engineering…many patents involved as well from the likes of Lary, Lubbers, Sicola, et al.
Hi Martin,
I wasn’t at 3PAR when the autonomic naming was being done, but I now understand that it was deliberate and that it did not originate (consciously) from IBM’s definitions.
David Scott, our CEO, tells me the use of autonomic comes from parallels with the Autonomic Nervous System in humans. The ANS receives stimuli from various systems (primarily nervous and chemical) and adjusts the body’s mechanisms in response, for instance heart rate, breathing rate, etc., in order to achieve some objective, such as escaping the large cat that is running after you.
Our approach to storage was similar. While we like to talk about our thin provisioning and wide striping features, the underlying technology that makes it all work is an extremely fine-grained internal instrumentation system that monitors and reports on all the physical and logical entities within. This collected data is the basis for actions the system takes in response to events occurring within it. These actions include handling various failure modes, responding to workload changes, provisioning storage as existing storage is consumed, relocating storage resources transparently, and associating groups of clustered servers with shared storage within the array.
A combination of user-controlled policies and internal algorithms guide the actions that the system takes. The design goal is for the administrator to monitor the actions that the system takes. FWIW, we think this lines up very well with IBM’s definition, so it’s OK with us if that’s the bar you want to use.
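To make the pattern concrete, here is a minimal sketch of that event-and-policy model: instrumentation emits events, and a mix of user-controlled policies and internal defaults decides what action the system takes, with the administrator left to monitor the outcome. All the names here (`Event`, `Policy`, `dispatch`) are invented for illustration and do not come from 3PAR's actual interfaces.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch only: instrumentation events matched against policies.

@dataclass
class Event:
    source: str   # e.g. a physical disk or logical volume
    kind: str     # e.g. "failure", "workload_change", "capacity_low"
    detail: dict

@dataclass
class Policy:
    matches: Callable[[Event], bool]
    action: Callable[[Event], str]

def dispatch(event: Event, policies: list[Policy]) -> list[str]:
    """Run every matching policy's action; return a log of what was done."""
    return [p.action(event) for p in policies if p.matches(event)]

# One user-controlled policy alongside one internal default:
policies = [
    Policy(lambda e: e.kind == "capacity_low",
           lambda e: f"pre-allocate more capacity for {e.source}"),
    Policy(lambda e: e.kind == "failure",
           lambda e: f"rebuild data from {e.source} onto spares"),
]

# The administrator reviews these actions after the system has taken them:
print(dispatch(Event("disk-7", "failure", {}), policies))
```

The point of the sketch is the inversion of roles: the administrator does not initiate the rebuild or the allocation, but monitors a log of actions the system already took.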
IBM’s ‘Autonomic Computing’ initiative was quite a strong meme when they launched it; I suspect it might have subconsciously influenced David and the team.
It is also important to note that the ANS not only reacts but also prepares to react. So if you see a big cat, your ANS may be preparing you to run away, but you don’t run away until absolutely necessary.
How this manifests itself in a storage array is one of the interesting challenges. Take, for example, the end-of-quarter billing run: can the storage array start moving the data onto faster disk before it is required? Because if it has to do so when the billing run kicks in, it will probably be too late.
Obviously this is a user-based policy, but it requires subtlety to ensure that a complex system doesn’t become overly complicated to manage.
The way this works with 3PAR’s AO is that volumes can be scheduled for tiering activity. End of cycle applications can have their volumes configured to be eligible for tiering at those times and it would probably make sense to put them on a performance QoS gradient to accelerate them relatively faster than other applications that are using the tiering feature.
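A rough sketch of what scheduled tiering eligibility with a QoS gradient might look like: volumes are only candidates for promotion inside their configured window, and a weight biases how aggressively they are accelerated relative to other tiering users. The field names, windows, and weighting scheme are all assumptions for illustration, not 3PAR AO's actual configuration model.

```python
from datetime import datetime

# Hypothetical volume configuration: (name, eligible days of month, QoS weight).
VOLUMES = [
    ("billing-db", range(28, 32), 3.0),  # end-of-cycle window, high gradient
    ("web-logs",   range(1, 32),  1.0),  # always eligible, normal weight
]

def promotion_priority(now: datetime, io_rate: dict[str, float]) -> list[tuple[str, float]]:
    """Rank volumes eligible for promotion to the fast tier, highest score first."""
    ranked = [(name, io_rate.get(name, 0.0) * weight)
              for name, days, weight in VOLUMES
              if now.day in days]
    return sorted(ranked, key=lambda x: x[1], reverse=True)

# On the 30th, billing-db outranks web-logs even at a lower raw I/O rate:
print(promotion_priority(datetime(2011, 6, 30), {"billing-db": 40.0, "web-logs": 90.0}))
```

The interesting design choice is that the schedule only grants eligibility; the actual ranking still follows observed I/O, so a quiet volume is not promoted merely because its window is open.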
The big cat analogy is interesting insofar as autonomic provisioning in a 3PAR array has a pre-allocation stage that reserves capacity as certain usage thresholds are exceeded in the system, but the actual incremental provisioning to a particular storage volume is done when the write occurs that needs the additional capacity.
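That two-stage behaviour, reserving a tranche at the pool level when a usage threshold is crossed while only drawing on it when a write actually lands, can be sketched as below. The threshold, tranche size, and class shape are invented for illustration and are not 3PAR's implementation.

```python
# Hypothetical sketch of two-stage thin provisioning: pool-level
# pre-allocation on a threshold, write-time incremental allocation.
class ThinPool:
    def __init__(self, reserved_gb: int, threshold: float = 0.8, tranche_gb: int = 100):
        self.allocated_gb = 0            # capacity actually consumed by writes
        self.reserved_gb = reserved_gb   # capacity pre-allocated to the pool
        self.threshold = threshold
        self.tranche_gb = tranche_gb

    def write(self, volume: str, gb: int) -> str:
        # Stage 2: incremental allocation happens only when the write occurs.
        self.allocated_gb += gb
        # Stage 1: pre-allocate another tranche once usage crosses the threshold.
        if self.allocated_gb / self.reserved_gb > self.threshold:
            self.reserved_gb += self.tranche_gb
            return f"{volume}: wrote {gb}GB, reserved another {self.tranche_gb}GB"
        return f"{volume}: wrote {gb}GB"

pool = ThinPool(reserved_gb=100)
print(pool.write("vol1", 50))  # below threshold: no extra reservation
print(pool.write("vol1", 40))  # 90/100 > 0.8: a new tranche is reserved
```

This mirrors the big cat analogy: the reservation is the body preparing to run, while the write-time allocation is the actual sprint.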
An observation of mine about IBM. They have very talented people, and some of them are certainly thought leaders who invent wonderful abstractions and models for computing, such as SNA, SAA and Autonomic Computing. Unfortunately for IBM, they have problems following through on the great ideas that come from their research centers and putting them into products. The process is very slow. My guess is that IBM may still have some directives regarding autonomic computing within their product development groups, but that these requirements take a back seat to other competition-driven requirements.
I am not just saying it: 3PAR takes the whole autonomic storage concept very seriously, and I think we demonstrate it very well with our product implementations. We have succeeded in an industry of heavyweights because of this focus on offloading mundane, repetitive, error-prone tasks from administrators.