Traveling To The Cloud – Predict

As I stated in my previous post, traditional monitoring approaches that focus on named systems no longer make sense. In an agile cloud environment the name of an individual system does not matter, and neither do the performance values of that single system.

In such a situation the prediction approach also has to change. The data flowing into IBM Operations Analytics – Predictive Insights should no longer identify a single system or a single instance of a resource. Instead, it should represent the sum of the resources or the average usage value. Let us review a few simple examples:

While our monitoring agents collect the key performance metrics of each system instance, such as

  • Disk I/O per second

  • Memory Usage in Megabytes

  • Network packets sent and received

  • CPU percentage used

we feed the following values into our prediction tool:

  • SUM(Disk I/O per second) across all used OS images

  • SUM(Memory Usage in Megabytes) across all used OS images

  • SUM(Network packets sent and received) across all used OS images

  • AVG(CPU percentage used) across all used OS images
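
The aggregation above could be expressed as a single query. The following is a minimal sketch; the table `METRICS_RAW` and all of its column names are hypothetical placeholders, not actual Tivoli Data Warehouse names:

```sql
-- Hypothetical raw metrics table: one row per OS image and interval.
-- Table and column names are illustrative only.
SELECT
    CloudName,
    TimeFrame,
    SUM(DiskIOPerSec)      AS SumDiskIOPerSec,       -- total disk I/O across all OS images
    SUM(MemUsageMB)        AS SumMemUsageMB,         -- total memory usage
    SUM(NetPacketsSentRcv) AS SumNetPacketsSentRcv,  -- total packets sent and received
    AVG(CPUPercentUsed)    AS AvgCPUPercentUsed      -- average CPU utilization
FROM METRICS_RAW
GROUP BY CloudName, TimeFrame;
```
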

IBM Monitoring stores historical data in the Tivoli Data Warehouse. A traditional system setup might feed the prediction tool directly from the data stored in the warehouse. With the elastic cloud approach we should add some new views to the database, which provide the summarized view of the data described above.

To ensure that no single operating system instance is overloaded, traditional resource monitoring still has to be deployed to each cloud participant. Distribution lists in IBM Monitoring help to do this automatically.

These lists of systems are also important for maintaining the efficiency of the views introduced for the prediction.

The following table is required in the WAREHOUS database. It represents the distribution list known from IBM Monitoring.
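
A table like this could be sketched as follows. The table and column names are hypothetical, chosen only to illustrate the idea:

```sql
-- Hypothetical DDL for the distribution-list table in WAREHOUS.
-- Names are illustrative, not real IBM Monitoring artifacts.
CREATE TABLE CLOUD_MEMBERS (
    CloudName  VARCHAR(64)  NOT NULL,  -- logical cloud the system belongs to
    SystemName VARCHAR(128) NOT NULL,  -- managed system name known to IBM Monitoring
    PRIMARY KEY (CloudName, SystemName)
);
```
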

Based on this table we can create views like the one below. With such a view we are able to feed disk usage data into the IBM Operations Analytics – Predictive Insights tool.
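
As an illustration of how such a view might be built, here is a sketch. The source table `DISK_HISTORY`, its columns, and the distribution-list table `CLOUD_MEMBERS` are assumptions standing in for the real warehouse tables:

```sql
-- Hypothetical view joining a distribution-list table with disk history data.
-- DISK_HISTORY and its column names stand in for the real TDW disk table.
CREATE VIEW CLOUD_DISK_V AS
SELECT
    m.CloudName,
    d.WriteTime                AS TimeFrame,
    SUM(d.ReadRequestsPerSec)  AS AllReadRequestPerSecond,
    SUM(d.WriteRequestsPerSec) AS AllWriteRequestPerSecond,
    AVG(d.WaitTimeSec)         AS AvgWaitTimeSec,
    SUM(d.ReadBytesPerSec)     AS AllReadBytesPerSec,
    SUM(d.WriteBytesPerSec)    AS AllWriteBytesPerSec
FROM DISK_HISTORY d
JOIN CLOUD_MEMBERS m
    ON d.SystemName = m.SystemName   -- restrict to systems in the distribution list
GROUP BY m.CloudName, d.WriteTime;
```
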

The column “CloudName” identifies which records belong to which stream. The “TimeFrame” column serves as the time dimension.

Five streams result from this view:

  • AllReadRequestPerSecond

  • AllWriteRequestPerSecond

  • AvgWaitTimeSec

  • AllReadBytesPerSec

  • AllWriteBytesPerSec

All streams are generated for each single “CloudName” instance.

In the Predictive Insights Modeling Tool the view is selectable (as a table), so generating the data model is straightforward.

The SQL line

makes sure that TimeFrame is a candle timestamp, a format that the IBM Operations Analytics – Predictive Insights tool understands.
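
A candle timestamp is a 16-character string of the form CYYMMDDHHMMSSmmm, where the leading digit C is the century indicator (1 for 20xx). An expression along the following lines could produce it in DB2; the column `SampleTime` and table `DISK_HISTORY` are hypothetical:

```sql
-- Hedged sketch: convert a DB2 TIMESTAMP into the candle timestamp
-- format CYYMMDDHHMMSSmmm (century flag '1' = 20xx, milliseconds zeroed).
SELECT '1' || SUBSTR(VARCHAR_FORMAT(SampleTime, 'YYYYMMDDHH24MISS'), 3) || '000'
           AS TimeFrame
FROM DISK_HISTORY;
```
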

This sample shows how a data model for the cloud might look.

As more and more systems move to the cloud and IT workloads are served with ever greater agility, the monitoring approach has to become more agile as well. The view of which key performance metrics matter has to change, too. But as you can see, the data is already there; we only have to shift our perspective a little.

So what is your approach? What requirements do you see arising while moving your monitoring and prediction tools to the cloud?

Follow me on Twitter @DetlefWolf, or drop me a discussion point below to continue the conversation.

In my next blog I will share a few ideas on how to automate the implementation of IT monitoring in a cloud environment.