Hello guys, how are you all doing?
In a model-serving scenario, as a member of an AI application development team,
what features would you expect a platform to provide for assessing a model's accuracy, effectiveness, and operational performance?
A few that I would expect from such a platform:
1. Measurement and comparison of how well the model is actually performing on real data
2. Performance over time as the data changes (drift); see the drift sketch after this list
3. Speed and resource usage
4. Monitoring and reporting dashboards, system alerts
5. Latency monitoring (average, percentiles); see the latency sketch after this list
6. Bias and fairness assessment
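For point 2, here's a minimal sketch of one way a platform could flag drift: keep a reference sample of a feature from training time and compare it against a recent serving window with a two-sample Kolmogorov-Smirnov test. The feature values, window sizes, and threshold below are made up for illustration.

```python
# Drift check sketch: compare a recent serving window of one feature against a
# reference (training-time) sample using a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

def drift_check(reference: np.ndarray, recent: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the recent window looks significantly different from the reference."""
    stat, p_value = ks_2samp(reference, recent)
    return p_value < alpha

# Toy example: a training-time snapshot vs. a shifted serving window
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)
recent = rng.normal(loc=0.4, scale=1.0, size=1_000)
print("drift detected:", drift_check(reference, recent))
```

A real platform would run this (or PSI, population stability index) per feature on a schedule and wire the result into the alerting from point 4.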
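For point 5, a rough sketch of tracking average and percentile latency over a rolling window. In practice a platform would more likely use a metrics library (e.g. Prometheus histograms), so treat the class and window size here as placeholders.

```python
# Latency tracking sketch: keep the most recent request latencies and report
# the average plus p50/p95/p99 on demand.
from collections import deque
import numpy as np

class LatencyTracker:
    def __init__(self, window: int = 10_000):
        self.samples = deque(maxlen=window)  # only the most recent requests are kept

    def record(self, latency_ms: float) -> None:
        self.samples.append(latency_ms)

    def report(self) -> dict:
        arr = np.asarray(self.samples)
        return {
            "avg_ms": float(arr.mean()),
            "p50_ms": float(np.percentile(arr, 50)),
            "p95_ms": float(np.percentile(arr, 95)),
            "p99_ms": float(np.percentile(arr, 99)),
        }

tracker = LatencyTracker()
for latency in (12.0, 15.5, 9.8, 120.0, 14.2):
    tracker.record(latency)
print(tracker.report())
```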
Also worth mentioning: an API is a very common ask once you build a platform like this and people start using it. Users want programmatic access to those metrics, for control and scalability, rather than relying only on the dashboards.
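On that API point, a hypothetical sketch of exposing the same metrics the dashboards show through a small HTTP endpoint; the path, model name, and in-memory store are all made up for illustration.

```python
# Metrics API sketch: let teams pull a model's latest evaluation and serving
# metrics programmatically instead of reading them off a dashboard.
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Stand-in for whatever metrics store the platform actually uses
FAKE_STORE = {
    "churn-model": {"accuracy": 0.91, "p95_latency_ms": 42.0, "drift_score": 0.03},
}

@app.get("/models/{model_name}/metrics")
def get_metrics(model_name: str) -> dict:
    """Return the latest metrics for one deployed model."""
    if model_name not in FAKE_STORE:
        raise HTTPException(status_code=404, detail="unknown model")
    return FAKE_STORE[model_name]

# Run with: uvicorn metrics_api:app --reload   (assuming this file is metrics_api.py)
```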