Running ML in production requires the ability to automate training, inference, and forecasting operations. The Loud ML daemon process provides both regularly scheduled operations and on-demand inference.
Jobs can be scheduled at a regular interval, defined in the JSON model settings, using the _start API, e.g. to automate outlier detection for live streaming data:

systemctl start loudmld
curl -X POST "localhost:8077/models/nab_cpu_utilization_asg_misconfiguration_mean_value__5m/_start?detect_anomalies=true&save_prediction=true"

Note that the URL must be quoted; otherwise the shell treats the & as a command separator and the save_prediction parameter is never sent.
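The same request can also be issued programmatically. Below is a minimal Python sketch that builds the _start URL with the query parameters shown above and POSTs it; the host, port (8077), and model name come from this tutorial, and the helper names (`build_start_url`, `start_model`) are illustrative, not part of the Loud ML API.

```python
import urllib.parse
import urllib.request

def build_start_url(model, host="localhost", port=8077, **flags):
    """Build the _start URL for a model, encoding boolean flags as
    lowercase strings (true/false) to match the curl example above."""
    params = urllib.parse.urlencode(
        {k: str(v).lower() for k, v in flags.items()}
    )
    return "http://{}:{}/models/{}/_start?{}".format(host, port, model, params)

def start_model(model, **flags):
    """POST the _start request to the Loud ML daemon (assumed running)."""
    url = build_start_url(model, **flags)
    req = urllib.request.Request(url, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (requires a running loudmld):
# start_model("nab_cpu_utilization_asg_misconfiguration_mean_value__5m",
#             detect_anomalies=True, save_prediction=True)
```

Keeping the URL construction in its own function makes it easy to verify the exact query string without a live daemon.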
Congratulations on making it this far. We hope this tutorial helps get you started on your Loud ML journey. Feel free to contribute and submit ideas, bug fixes, and pull requests to enhance the OSS version and the documentation.
Twitter channel: @loud_ml