The Virtues of Visibility in Virtualization
Virtualization remains a hot topic among many of the people we talk to. The opportunities to save money by improving utilization levels, to consolidate applications onto fewer servers and to provide a more agile and responsive IT service remain compelling drivers.
Plenty of challenges remain, though. Network World recently reported on a survey that listed many of the concerns people have. The report focused on the difficulty of troubleshooting, and in particular of identifying the root cause of a problem, something that virtualization makes more complex for a variety of reasons.
Why does virtualization present a challenge? Here are two reasons:
First, because it is yet another layer of software in the stack whose behaviour must be understood. Just as with the other layers, it is potentially complex in its own right, may need patching and can benefit from tuning. Expertise in these areas is not yet universal.
Second, because virtualization increases the rate of change in the data centre. Once upon a time, new servers could only be brought online as fast as the data centre teams could rack them and the network teams could connect them (assuming there was sufficient space, power, cooling and network connectivity to begin with); new virtual machines, by contrast, can pop in and out of existence at the press of a button.
This increasing rate of change is a challenge because the knowledge that operations staff carry in their heads, and rely on when responding to incidents and solving problems, rapidly goes out of date. An application that depended on one set of servers last week, or yesterday, or just an hour ago, may now be sitting somewhere else. If the organisation’s CMDB is stale, perhaps because it relies on imperfect manual control processes, the first task of a support team may be to figure out exactly where an application is running before they can begin any deeper investigation.
One way to help address these challenges is to have better intelligence – more complete and more up to date – on what is out there. What virtual machines do I have running today? Which physical machines are they running on? Which business applications do they support?
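To make this concrete, here is a minimal sketch of the kind of data collection involved, assuming a VMware environment managed by vCenter and the open-source pyVmomi library. The server name and credentials are placeholders, and mapping virtual machines onto the business applications they support would need further data that this sketch does not attempt; a discovery product does considerably more, but the core questions start here.

```python
# Minimal sketch: list each virtual machine, its power state and the
# physical host it currently runs on, via pyVmomi against vCenter.
# "vcenter.example.com" and the credentials are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="readonly",
                  pwd="secret", sslContext=context)
try:
    content = si.RetrieveContent()
    # A container view walks every VirtualMachine in the inventory tree.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        host = vm.runtime.host  # the physical host the VM is placed on, if any
        print(f"{vm.name}\t{vm.runtime.powerState}\t"
              f"{host.name if host else 'unknown'}")
    view.Destroy()
finally:
    Disconnect(si)
```

Run on a schedule, even a simple script like this yields the up-to-date VM-to-host mapping that incident responders need; the harder part, which discovery tools take on, is keeping that picture continuously fresh and tying it back to the applications.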
This kind of information is invaluable, whether to people trying to diagnose problems – the scenario the Network World piece highlights – or to those responsible for the kind of architectural governance that should prevent many problems from cropping up in the first place.
There are other ways in which organisations benefit from an accurate view of their estate. Better visibility into the virtualized environment also helps control the otherwise prohibitive software licence costs that can come with the burgeoning number of virtual servers in the data centre.
In all these cases, the information has to be accurate and current to be useful. We think that discovery tools like Tideway Foundation are a great way to collect this information automatically and keep it up to date.