This blog was first published on Service Management 360 on 31-Jul-2013.
In my previous post, Monitoring as a Service: Part 2, the business model, I highlighted the need for a well-defined service catalog including deliverables and work items to be prepared.
After I convinced my wife of the value of monitoring (see part 1) and settled on the delivery model for a Monitoring as a Service business (part 2), I wanted to think about the implementation.
There is a general question we must consider at the beginning. Do we have one monitoring environment for all customers, or should each customer have its own environment? Gaston Hernan Concilio discussed this question in detail in his blog, “To share or not to share: A monitoring dilemma.”
Besides this discussion, it is essential to pick a product that covers all of the following aspects:
Infrastructure management
The monitoring infrastructure should support lightweight as well as enterprise-class monitoring. It is essential that it supports the management of the monitoring software (agentless as well as agent-based), the distribution of the monitoring components and the setup of the environment. A command-line interface is required to set up multiple instances.
Monitoring rules
The monitoring rule definitions have to support simple, single-attribute comparisons and multiple-attribute verifications as well as complex, multistage, rule-dependent decisions. These rules should be stored in a central repository, to be distributed from a central point.
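In ITM terms, such rules are called situations. As an illustrative sketch only (the attribute names below come from the Windows OS agent and are examples, not a recommendation), the first line is a single-attribute comparison on CPU utilization, and the second combines two attributes to flag low disk space while excluding the _Total summary row:

```
*IF *VALUE NT_Processor.%_Processor_Time *GE 90

*IF *VALUE NT_Logical_Disk.%_Free *LE 10 *AND *VALUE NT_Logical_Disk.Disk_Name *NE _Total
```

Storing situations like these centrally at the hub and distributing them to managed systems is exactly the central-repository requirement described above.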
Historical data collection
A wide range of historical data of system performance and availability should be gathered and stored in a central place for later analysis and reporting.
Reporting
This is the central service customers are expecting in a monitoring service, and it is essential that the monitoring tool offers a wide range of reports, including historical availability reviews, activity reports and capacity projections for the future.
Multitenancy
While multiple customers share the same service, it is essential that the product setup supports a clear, strict separation of customer data and minimizes the influence of one customer's situation on another.
With all that in mind, I reviewed the capabilities of IBM Tivoli Monitoring, finding that it fulfilled all my requirements except the multitenancy requirement.
The product supports this partially, but several manual steps have to be taken to separate customer environments from each other. Additionally, the licensing agreements stopped me from going for a single monitoring infrastructure for all customers. The IBM SmartCloud Application Performance Management (SCAPM) Entry Edition offering limits the infrastructure to a single Tivoli Enterprise Monitoring Server (TEMS) and does not allow any remote TEMS implementation. This leads to the following architectural approach:
Each customer gets its own IBM Tivoli Monitoring (ITM) infrastructure (Tivoli Enterprise Monitoring Server, TEMS; Tivoli Enterprise Portal Server, TEPS; Tivoli Data Warehouse, TDWH) and reporting engine (Tivoli Common Reporting, TCR) together on a single OS image. The service provider provides this image and, if requested, the required server hardware. Each image is generated from an installation script (IBM provides VMware images for SCAPM Entry) and has an agent depot containing all licensed agents. The agent depot is used to distribute the agents to the customers’ systems.
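The depot-based distribution can be driven from the command line with ITM’s `tacmd` utility. The sketch below only prints the commands it would run (a dry run); the host names, path and credentials are placeholders, not part of any real environment:

```shell
#!/bin/sh
# Dry-run sketch: print the tacmd commands that would populate a customer's
# agent depot and deploy an OS agent to one of that customer's systems.
# All host names and paths below are placeholders.
TEMS_HOST="customer1-tems.example.com"
DEPOT_SOURCE="/opt/install/ITM/unix"
TARGET_HOST="customer1-app01.example.com"

deploy_agent() {
  echo "tacmd login -s $TEMS_HOST -u sysadmin"
  echo "tacmd addBundles -i $DEPOT_SOURCE"
  echo "tacmd createNode -h $1 -u root"
}

deploy_agent "$TARGET_HOST"
```

Wrapping the same three steps in a loop over a host list is what turns this per-customer setup into the scripted, repeatable installation the architecture depends on.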
To gain control across multiple customer instances, we add IBM Netcool OMNIbus as an event consolidation engine. This enables a consolidated view across all connected customer environments. Each customer can view its own data in its own monitoring environment and run ad hoc reports. Access to the OMNIbus implementation is restricted to the service provider.
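Inside OMNIbus, the consolidated view can then be filtered per customer environment. A sketch in OMNIbus SQL against the standard `alerts.status` table, assuming events are tagged with the originating customer on insertion (the `Customer` column here is a hypothetical custom field, not an out-of-the-box one):

```sql
select Node, Severity, Summary, LastOccurrence
from alerts.status
where Customer = 'customer1' and Severity >= 4;
```

The same tagging field could drive per-customer event list views for the service provider’s operators while each customer still sees only its own ITM environment.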
This implementation also offers the ability to provide different levels of service quality to individual customers (Gold – Silver – Standard). Based on this architecture, additional Information Technology Infrastructure Library (ITIL) compliant processes may be introduced as further offerings to the Managed Service Provider’s (MSP) customers.
Is this environment manageable?
This setup is manageable if we have a high degree of automation for the installation, implementation and maintenance. All tasks have to be designed very well, which is a great job for a system programmer; alternatively, ask your IBM Software Services team for help.
Is this solution good for all business sizes?
For very small customers (with a small number of systems) a dedicated monitoring service seems to be a little bit oversized. Depending on the customer’s expectations, a shared environment with lightweight monitoring might be good enough.
I do not expect my dentist to watch the health of his or her computer systems. The dentist should take care of me. But I do expect the dentist’s computer systems to be up and running and to support the service to me.
So it might be useful to set up a shared ITM environment, without customer access, inside the MSP domain and connect these customer systems to that environment.
As Gaston mentioned, it is about “the best for you,” and I’d like to add: “and for your customers.”
In the fourth part of my blog series, I will shed some light on the procedures we need to put in place to run this Monitoring as a Service business successfully. What do you think? Please share your comments below.