This post covers content from the “Interpretability and Explainability” lecture of our Machine Learning in Production course. For other chapters, see the table of contents.

Machine-learned models are often opaque and make decisions that we do not understand. As discussed, we use machine learning precisely when we do not know how to solve a problem with fixed rules and instead try to learn from data; there are many examples of systems that seem to work and even outperform humans, yet we have no idea how they work. So, how can we trust models that we do not understand…

Christian Kästner

associate professor @ Carnegie Mellon; software engineering, configurations, open source, SE4AI, juggling
