Logs and Metrics Go In, Incidents and Root Cause Come Out
Install our Fluentd log collector and Prometheus metrics collector (it takes less than two minutes). No parsers, code changes, rules, or config are needed. Then let our machine learning (ML) take over!
Within minutes, the ML learns the structures of your logs, and categorizes each event into a “dictionary” of unique event types. Categorization is crucial for accurate learning of the patterns in your logs and metrics.
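To make the categorization step concrete, here is a minimal sketch of the idea, not Zebrium's actual ML: mask the variable parts of each log line (IP addresses, hex IDs, numbers) so that structurally identical events collapse into a single entry in the event-type "dictionary". All names and patterns below are illustrative assumptions.

```python
import re

# Illustrative token maskers (assumed, not Zebrium's real parser):
# replace variable fields with placeholders so lines that differ only
# in their variable parts map to the same event type.
VARIABLE_TOKENS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),   # IPv4 addresses
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),           # hex identifiers
    (re.compile(r"\b\d+\b"), "<NUM>"),                      # plain numbers
]

def event_type(line: str) -> str:
    """Reduce a raw log line to its structural template."""
    for pattern, placeholder in VARIABLE_TOKENS:
        line = pattern.sub(placeholder, line)
    return line

# Build the "dictionary" of unique event types with occurrence counts.
dictionary: dict = {}
for raw in [
    "connection from 10.0.0.1 closed after 31 ms",
    "connection from 192.168.4.7 closed after 8 ms",
    "disk /dev/sda1 usage at 91%",
]:
    key = event_type(raw)
    dictionary[key] = dictionary.get(key, 0) + 1

# The two "connection closed" lines collapse into one event type,
# so the dictionary ends up with two entries.
```

Once events are categorized this way, the frequency and timing of each event type can be tracked independently, which is what makes the pattern learning in the next step possible.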
Within the first hour, the ML learns the patterns of every log event and metric (and its learning keeps improving as it sees more data).
When log or metric patterns change (e.g. a shift in periodicity or frequency, or a new or rare message appearing), our ML detects these changes as anomalies. But detection alone is not enough: to separate signal from noise, it then looks for hotspots of abnormally correlated anomalies across both metrics and logs.
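The anomaly-plus-hotspot idea can be sketched as follows. This is a deliberately crude illustration under assumed thresholds, not Zebrium's actual algorithm: an event type is flagged as anomalous in a time window when its count deviates sharply from its historical mean, and a window becomes a "hotspot" when several event types are anomalous together.

```python
from collections import defaultdict

def anomalies(counts_per_window, threshold=3.0):
    """counts_per_window: {event_type: [count in window 0, window 1, ...]}.
    Flags an event type in the latest window if its count far exceeds
    the mean of earlier windows (a crude stand-in for pattern learning)."""
    flagged = defaultdict(list)  # window index -> anomalous event types
    for etype, counts in counts_per_window.items():
        baseline = counts[:-1]
        mean = sum(baseline) / len(baseline)
        if counts[-1] > threshold * max(mean, 1):
            flagged[len(counts) - 1].append(etype)
    return flagged

def hotspots(flagged, min_correlated=2):
    """A hotspot is a window where multiple anomalies coincide."""
    return {w: types for w, types in flagged.items()
            if len(types) >= min_correlated}

history = {
    "connection reset by peer": [2, 1, 3, 2, 40],    # sudden burst
    "disk usage warning":       [0, 1, 0, 1, 12],    # correlated burst
    "heartbeat ok":             [60, 58, 61, 59, 60] # steady, not anomalous
}
# Window 4 contains two correlated anomalies, so it surfaces as a hotspot.
result = hotspots(anomalies(history))
```

The key design point this illustrates: a single anomalous event type is usually noise, but several event types (or metrics) going anomalous in the same window is a much stronger incident signal.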
If you use an Incident Management tool like PagerDuty or Slack, or an existing log management or monitoring tool, Zebrium can augment any incident with a characterization of root cause.
When an incident occurs, a signal is sent to Zebrium. Zebrium then finds any ML-detected incidents or sets of anomalous log/metric patterns that coincide with the signal, and automatically feeds this information back to your incident management tool.
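A signal of this kind is typically just a small JSON payload posted over a webhook. The sketch below is a hypothetical example of building such a payload: the field names (`incident_id`, `timestamp`, `deployment`) are assumptions for illustration, not Zebrium's documented API schema; consult the actual integration docs for the real format.

```python
import json
from datetime import datetime, timezone

def build_signal(incident_id: str, occurred_at: datetime) -> str:
    """Serialize a minimal incident signal (hypothetical field names)."""
    payload = {
        "incident_id": incident_id,            # ID from PagerDuty, Slack, etc.
        "timestamp": occurred_at.isoformat(),  # when the incident fired
        "deployment": "production",            # assumed environment label
    }
    return json.dumps(payload)

# Example: an incident fired at noon UTC (hypothetical incident ID).
signal = build_signal(
    "PD-12345", datetime(2021, 6, 1, 12, 0, tzinfo=timezone.utc)
)
```

In practice this JSON body would be POSTed to the collector's webhook endpoint, and the timestamp is what lets Zebrium line the signal up against coinciding anomalous log/metric patterns.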
Read more here: You've Nailed Incident Detection, What About Incident Resolution?
The hotspots detected in the steps above are packaged into human-readable incidents, making it easy for a user to clearly see the correlated set of anomalous log events and/or metrics.
Incident alerts are sent via Slack, email or webhook.
The entire process is completely autonomous: no manual configuration, user-defined thresholds, or alert rules are required.