Use intermediate hosts to process multi-source or complex data
Imagine you need a monitor to react to a value derived from several performance values. For example, you may need to trigger an alert only if both CPU load and free memory have crossed certain thresholds.
If those values come from the same host, you can always use a generic monitor type, such as Script or Program or Python Script, and do whatever math is required. What should you do if the performance values can only be taken from different hosts?
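For the single-host case, the required math can be as simple as the sketch below. The threshold values and the function name are hypothetical; a real script would obtain the metrics from the host itself:

```python
# Minimal sketch: trigger only when BOTH thresholds are crossed.
# The threshold values below are examples; substitute your own,
# and replace the arguments with values read from the host.

CPU_LOAD_THRESHOLD = 0.9      # 90% CPU load or more
FREE_MEM_THRESHOLD_MB = 512   # 512 MB free or less

def combined_alert(cpu_load: float, free_mem_mb: float) -> bool:
    """Return True only if both values have crossed their thresholds."""
    return (cpu_load >= CPU_LOAD_THRESHOLD
            and free_mem_mb <= FREE_MEM_THRESHOLD_MB)

print(combined_alert(0.95, 256))   # both thresholds crossed: True
print(combined_alert(0.95, 2048))  # plenty of free memory: False
```

A Script or Program monitor can run such a script and react to its result.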
There are several solutions.
Get the values directly from monitoring database
The first approach is straightforward. Create the corresponding IPHost monitors; use a script to extract whatever data they have collected, process the data and use the result as a new performance value.
The advantage of this approach is that no additional source, service or host is required to do the processing.
Disadvantages:
- you keep extra (source) monitors; with any license other than Enterprise, that reduces the number of monitors you can have
- concurrent access to the monitoring database can be tricky; if the monitoring setup is busy, there can be considerable delays in getting the data, and the values may be out of sync with one another
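A sketch of such a processing script is below. How the latest values are extracted from the monitoring database is installation-specific, so `fetch_latest_value()` is a hypothetical placeholder (stubbed here with fixed readings); the monitor names and thresholds are assumptions as well:

```python
# Sketch: combine the latest readings of two source monitors into one
# derived performance value. fetch_latest_value() is a placeholder --
# replace it with an actual query against your monitoring database.

def fetch_latest_value(monitor_name: str) -> float:
    # Placeholder: a real script would query the monitoring database
    # for the most recent stored value of the given monitor.
    stub = {"cpu-load-host-a": 0.93, "free-mem-host-b": 300.0}
    return stub[monitor_name]

def derived_value() -> float:
    cpu = fetch_latest_value("cpu-load-host-a")   # fraction, 0..1
    mem = fetch_latest_value("free-mem-host-b")   # MB free
    # Example derived value: 1.0 when both thresholds are crossed,
    # 0.0 otherwise; a generic Script/Program monitor can use this
    # number as its new performance value.
    return 1.0 if (cpu >= 0.9 and mem <= 512) else 0.0

print(derived_value())
```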
Use third-party services
There are multiple services offering the possibility to aggregate and preprocess data: SaaS (such as Datadog, New Relic, etc.) and self-hosted ones (time series databases, e.g. Graphite). The source data can first be channeled to that service and later extracted by IPHost (again, using a custom or HTTP(S) monitor type).
Advantage: you do not need to spend local computing resources on the processing.
Disadvantages:
- you add a new variable to your monitoring setup; the third-party or self-hosted service may be inaccessible, under high load, etc., so the data may not be available in time
- you need to spend time and/or money on the third-party service to have it configured and working properly
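With self-hosted Graphite, for example, channeling a value in can be done over its plaintext protocol (one "metric value timestamp" line per datapoint, TCP port 2003 by default). The host name and metric name below are assumptions; adjust them to your setup:

```python
# Sketch: forward a source value to a self-hosted Graphite instance
# via its plaintext protocol. The host, port and metric name are
# example values -- substitute your own.

import socket
import time

def graphite_line(metric: str, value: float, timestamp: int) -> str:
    """Format one datapoint in Graphite's plaintext protocol."""
    return f"{metric} {value} {timestamp}\n"

def send_to_graphite(metric: str, value: float,
                     host: str = "graphite.example.com",
                     port: int = 2003) -> None:
    line = graphite_line(metric, value, int(time.time()))
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))

# IPHost can later pull the aggregated series back, e.g. with an
# HTTP(S) monitor requesting Graphite's render API:
#   http://graphite.example.com/render?target=hosts.a.cpu&format=json
print(graphite_line("hosts.a.cpu", 0.93, 1700000000), end="")
```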
Offload the processing to a virtual machine
This approach resembles, to some extent, the previous one. However, you do the “quick and dirty” processing yourself, using virtual machines (VMs) or Windows Subsystem for Linux (WSL) to do the work.
The advantage is the relatively small amount of resources required for such processing: most simple performance values can be obtained from a Linux-type environment, using a variety of pre-installed tools.
Disadvantages:
- you still need to install and configure the VM or WSL instance
- you need to write the “wrapper” scripts that get the required data and process them as required
One of the advantages of WSL is its ability to share data directly with its host (i.e. the system where IPHost is running). This way it can save IPHost time and resources, offering already processed, easy-to-read performance data.
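A wrapper script of this kind can be very small; the sketch below reads CPU load and available memory from the standard Linux `/proc` files and prints one easy-to-read line for IPHost to consume (the output format is an example, not an IPHost requirement):

```python
# Sketch of a WSL/Linux "wrapper" script: read CPU load and free
# memory straight from /proc and print a single combined line.
# /proc/loadavg and /proc/meminfo are standard Linux interfaces.

def parse_loadavg(text: str) -> float:
    """1-minute load average: first field of /proc/loadavg."""
    return float(text.split()[0])

def parse_free_mem_kb(text: str) -> int:
    """MemAvailable value (in kB) from /proc/meminfo."""
    for line in text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1])
    raise ValueError("MemAvailable not found")

if __name__ == "__main__":
    with open("/proc/loadavg") as f:
        load = parse_loadavg(f.read())
    with open("/proc/meminfo") as f:
        mem_kb = parse_free_mem_kb(f.read())
    # One already-processed value pair, easy for IPHost to read:
    print(f"load={load} free_mb={mem_kb // 1024}")
```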
What is your preferred way to proxy performance data when you need to do complex processing and/or aggregation?
If you need assistance implementing any of the proxying approaches mentioned above, or if you have an idea to offer, feel free to contact us or just leave a comment below.