Evaluate

The eval-model command compares original historical data against model predictions. The -s flag saves the results to the default bucket, which facilitates data visualization. The -a flag calculates an anomaly score for each data point.

Output data points are saved to the output bucket and each point contains the following information:

  • @mean_value: the original data point value, or null if missing
  • mean_value: the predicted normal value, named according to the JSON model definition
  • lower_mean_value: the minimum normal value, with 99.7 percent confidence
  • upper_mean_value: the maximum normal value, with 99.7 percent confidence
  • score: the anomaly score, in the range [0.0, 100.0]
  • is_anomaly: whether the data point is flagged as abnormal
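As a sketch of what one output data point carries, the fields above can be pictured as a record like the following (the values here are illustrative, not real model output):

```python
# Illustrative shape of one eval-model output data point, based on the
# fields listed above. All values are made up for the example.
point = {
    "timestamp": 1567460400.0,
    "@mean_value": 35.168,       # original observed value, or None if missing
    "mean_value": 38.212,        # predicted normal value
    "lower_mean_value": 33.2,    # minimum normal value, 99.7% confidence
    "upper_mean_value": 43.2,    # maximum normal value, 99.7% confidence
    "score": 40.9,               # anomaly score in [0.0, 100.0]
    "is_anomaly": False,         # True when the point is flagged abnormal
}

# A point is considered normal when its observed value stays inside the
# [lower_mean_value, upper_mean_value] confidence band.
in_band = point["lower_mean_value"] <= point["@mean_value"] <= point["upper_mean_value"]
print(in_band)
```
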
For example, to evaluate the model over the last 30 days and save the results to the output bucket:

loudml -e "eval-model cpu_utilization_asg_misconfiguration -f now-30d -t now -s -a -o output"

Or if you need to print the output to the terminal:

loudml -e "eval-model cpu_utilization_asg_misconfiguration -f now-30d -t now -a"

The value between brackets, e.g. [ 40.9], is the anomaly score for the data point; it ranges from 0 to 100. A star * indicates that the score is above the normal threshold, and the data point is therefore flagged as abnormal.

timestamp          @mean_value         loudml.mean_value
1567460400.0       35.168             38.212 [ 40.9]
1567460700.0       46.158             38.709 [ 78.1]
1567461000.0       39.946             43.764 [ 37.6]
1567461300.0       33.316             35.549 [ 32.3]
1567461600.0       57.832             32.826 [* 100.0]
1567461900.0       31.222             34.417 [ 68.5]
1567462200.0       33.588             30.978 [ 65.9]
1567462500.0       30.496             33.961 [ 72.5]
1567462800.0       39.501             34.369 [ 98.0]
1567463100.0       46.334             33.238 [* 100.0]
1567463400.0       31.154             33.193 [ 44.2]
1567463700.0       31.165             32.515 [ 26.1]
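The tabular output above can be post-processed outside of Loud ML if needed. As a minimal sketch (not part of the Loud ML tooling itself), the following script extracts the points flagged with a star:

```python
import re

# A few sample lines in the format printed by eval-model: timestamp,
# observed value, predicted value, and the anomaly score in brackets,
# where '*' marks a point flagged as abnormal.
output = """\
1567461600.0       57.832             32.826 [* 100.0]
1567461900.0       31.222             34.417 [ 68.5]
1567463100.0       46.334             33.238 [* 100.0]
"""

# Match: timestamp, observed, predicted, optional '*' flag, score.
pattern = re.compile(
    r"^(?P<ts>[\d.]+)\s+(?P<observed>[\d.]+)\s+(?P<predicted>[\d.]+)"
    r"\s+\[(?P<flag>\*?)\s*(?P<score>[\d.]+)\]$"
)

# Collect (timestamp, score) pairs for the starred (abnormal) points only.
anomalies = []
for line in output.splitlines():
    m = pattern.match(line)
    if m and m.group("flag") == "*":
        anomalies.append((float(m.group("ts")), float(m.group("score"))))

print(anomalies)
```

Running this against the three sample lines keeps only the two points whose score reached the anomaly threshold.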