In a previous post, I talked about using TDD for machine learning applications. It was a very basic mind-dump, so I'm happy to see the Dato blog running a series on evaluating and testing machine learning models.

Here's a brief summary of each post, with links to the originals:
- Orientation: Alice opens the series with an introduction to the jargon used in evaluating machine learning models. She covers the three most important considerations when evaluating a model: picking the metric, picking the dataset on which to measure that metric, and meta-evaluation of models.
- Classification Metrics: Alice covers common metrics used when evaluating classification models.
- Ranking and Regression Metrics: Alice covers common metrics used when evaluating regression or ranking models.
- Hyperparameter Tuning: Hyperparameters are values that must be supplied to the training procedure rather than learned from the data, for example the regularization strength used in some regression models.
- Validation and Offline Testing: She sheds light on bootstrapping and jackknifing, two resampling techniques used for validation.
- Online A/B Testing: She sheds light on the basics of A/B testing and its common pitfalls.
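To make the hyperparameter tuning idea concrete: the simplest approach is a grid search, trying each candidate value (such as a regularization strength) and keeping the one with the best validation score. Here is a minimal sketch; `train_and_score` and the candidate values are illustrative stand-ins, not anything from the original posts:

```python
def train_and_score(reg_strength, train_data, valid_data):
    # Hypothetical placeholder: a real implementation would fit a model
    # (e.g. a regularized regression) with this strength on train_data
    # and return its score on valid_data.
    raise NotImplementedError

def grid_search(candidates, train_data, valid_data, score_fn):
    """Try each candidate hyperparameter value and return the one
    with the highest validation score."""
    best_value, best_score = None, float("-inf")
    for value in candidates:
        score = score_fn(value, train_data, valid_data)
        if score > best_score:
            best_value, best_score = value, score
    return best_value, best_score

# Usage sketch (score_fn here is a dummy standing in for train_and_score):
# best, score = grid_search([0.01, 0.1, 1.0], train, valid, train_and_score)
```

Grid search is only the most basic strategy; Alice's post also discusses smarter alternatives to trying every combination exhaustively.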
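Bootstrapping, mentioned in the validation post, means resampling the dataset with replacement many times to estimate the variability of a statistic. A minimal sketch, assuming we want the spread of a sample mean (the data values here are made up for illustration):

```python
import random

def bootstrap_means(data, n_resamples=1000, seed=0):
    """Estimate the sampling distribution of the mean by drawing
    `n_resamples` samples of len(data) with replacement from `data`."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        sample = [rng.choice(data) for _ in range(len(data))]
        means.append(sum(sample) / len(sample))
    return means

# Illustrative accuracy scores from, say, repeated validation runs:
scores = [0.71, 0.74, 0.68, 0.77, 0.73, 0.70]
means = bootstrap_means(scores)
# The spread of `means` approximates the standard error of the mean score.
```

Jackknifing is similar in spirit but leaves out one observation at a time instead of resampling with replacement.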