I received the following question:

If we understand correctly, the blog entry and microdelta_example.cfg explain that the parameter `--delta=10 [--tolerance=5]` overrides the interval (e.g. 3 minutes) set for the getter by collecting the data twice during those 3 minutes?

That is not entirely true. The switch --delta, together with --tolerance, determines which datasets ("calls") are selected from the store for the check, or more precisely, which time delta must exist between those datasets. If no datasets matching this delta are found, the check returns UNKNOWN.
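As a rough illustration of that selection logic, here is a hypothetical sketch (not the plugin's actual code): pick the newest pair of stored calls whose timestamp gap lies within delta ± tolerance, and treat "no such pair" as UNKNOWN.

```python
def select_pair(timestamps, delta, tolerance):
    """timestamps: sorted epoch seconds of the calls in the store.
    Returns the newest pair whose gap is within delta +/- tolerance,
    or None (which the check would map to UNKNOWN)."""
    for i in range(len(timestamps) - 1, -1, -1):
        for j in range(i - 1, -1, -1):
            gap = timestamps[i] - timestamps[j]
            if abs(gap - delta) <= tolerance:
                return timestamps[j], timestamps[i]
    return None  # no matching datasets -> UNKNOWN

# Store contents matching the example timeline below
# (10:00:00, 10:00:10, 10:03:00, 10:03:10 as seconds since midnight)
store = [36000, 36010, 36180, 36190]
print(select_pair(store, delta=10, tolerance=5))  # -> (36180, 36190)
print(select_pair(store, delta=60, tolerance=5))  # -> None
```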

The getter defines how the calls get into the store in the first place, so that an interval matching the delta exists. One way is simply to run the getter often enough, i.e. to set a correspondingly short interval in the Nagios configuration. For a 10 second delta, this would mean a very large number of calls and would result in a significant usage increase.

Another, newer method is to set the MICRODELTA variable to 10, as described in this blog entry. The getter then collects the data twice per call, with a delta of 10 seconds in between. Example: with a 3 minute interval for the getter and a MICRODELTA of 10 seconds, the store will soon contain calls with the following timestamps:

Call0: 10:00:00
Call1: 10:00:10
Call2: 10:03:00
Call3: 10:03:10 ...
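The timeline above can be reproduced with a small sketch (the function name is mine, not the plugin's): each scheduled run contributes two collections, the second one MICRODELTA seconds after the first.

```python
def store_timeline(start, interval, microdelta, runs):
    """Timestamps a getter with MICRODELTA produces: per scheduled run,
    one collection at the run's start and one microdelta seconds later."""
    calls = []
    for n in range(runs):
        t = start + n * interval
        calls.append(t)               # first collection of this run
        calls.append(t + microdelta)  # second collection, 10 s later
    return calls

# 10:00:00 as seconds since midnight, 3-minute interval, MICRODELTA=10
print(store_timeline(start=36000, interval=180, microdelta=10, runs=2))
# -> [36000, 36010, 36180, 36190]
#    i.e. 10:00:00, 10:00:10, 10:03:00, 10:03:10
```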

What is the advantage compared to calling the getter every 10 seconds? The disadvantage of that would be a significant usage increase.

Exactly, and the advantage is that there won’t be any usage increase.
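A back-of-the-envelope comparison makes this concrete (assumption on my part: "usage" here refers to the number of scheduled getter executions hitting the monitored system):

```python
SECONDS_PER_HOUR = 3600

# Naive approach: schedule the getter every 10 seconds
checks_10s_interval = SECONDS_PER_HOUR // 10    # 360 executions/hour

# MICRODELTA approach: keep the 3-minute interval, collect twice per run
checks_microdelta = SECONDS_PER_HOUR // 180     # 20 executions/hour
collections_microdelta = checks_microdelta * 2  # 40 collections/hour

print(checks_10s_interval, checks_microdelta, collections_microdelta)
# The store still receives datasets 10 s apart, but the check
# frequency stays at the original 3-minute schedule.
```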

How can we verify that the changes are working? In the store files or in the RRD?

In the store files, which can be viewed with --explore=calls.