Running ML in production requires the ability to automate training, inference, and forecast operations. Regular scheduled operations and on-demand inference are provided via the Loud ML model server and the scheduled_job API.
Jobs can be scheduled at a regular interval. The start-model command schedules a recurring job that fetches new streaming data and tags abnormal data points:
loudml -e "start-model cpu_utilization_asg_misconfiguration -a"
Scheduled jobs can be configured to perform other operations too. Refer to the documentation for more advanced usage and examples.
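As a sketch of what an on-demand call against the model server might look like, the snippet below builds the URL for a forecast request over the server's HTTP API. The host, default port (8077), and the `_forecast` endpoint path are assumptions based on a typical Loud ML setup, not a verified API reference; check the documentation for your version before using them.

```python
# Sketch: build an on-demand forecast request for the Loud ML model server.
# The port (8077) and the `/models/<name>/_forecast` endpoint are assumptions
# drawn from a typical setup -- verify them against your own server config.
from urllib.parse import urlencode

def forecast_url(model, host="localhost", port=8077,
                 date_from="now", date_to="now+1d"):
    """Return the URL for an on-demand forecast request (hypothetical endpoint)."""
    query = urlencode({"from": date_from, "to": date_to})
    return f"http://{host}:{port}/models/{model}/_forecast?{query}"

url = forecast_url("cpu_utilization_asg_misconfiguration")
# POST this URL (e.g. with requests.post(url)) once the model server is running.
```

The same pattern applies to other operations the server exposes; only the endpoint path and query parameters change.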
Congratulations on making it this far. We hope this tutorial helps get you started on your Loud ML journey. Feel free to contribute and submit ideas, bug fixes, and pull requests to enhance the OSS version and the documentation.
Twitter channel: @loud_ml