
While machine learning has been around for decades, much of the recent news surrounding it, particularly in the last year, has involved interpretability and related ideas such as trust, the ML black box, fairness, and ethics. The topic's growing popularity reflects its growing importance: as machine learning becomes more advanced and complex, its outputs become harder for humans to explain. Interpretable machine learning is the idea that humans should, at some level, be able to understand the decisions made by algorithms. Here are some reasons why machine learning interpretability matters.


People Care

One of the biggest reasons interpretability has gone mainstream is that algorithms are making decisions that affect people's lives. At the end of the day, people do not care whether a decision was made by a person or a computer; they simply want to know why it was made. By extension, people care more and more about ML interpretability. For example, one of the most promising use cases for artificial intelligence in the healthcare industry is selecting participants for clinical trials, and those who are or are not selected want to know why.


Governments Are Beginning To Care

The European Union’s General Data Protection Regulation was one of the first major regulations in this domain, and while it mainly focuses on privacy, it also touches on interpretability. Beyond their impact on individuals, such regulations are another sign that model interpretability matters more than ever. And for anyone outside the EU who feels this does not apply to them, think again: many US states have introduced similar data protection laws, a trend that will continue into the new year and beyond.


Data Scientists Should Care

Customers and government regulations should be enough to make those who develop ML models and systems care about interpretability, yet there are still skeptics within the industry. Everyone should care about ML interpretability because it leads to better models: understanding and trusting a model and its results is a hallmark of good science. In addition, analysis that helps explain a model's decisions acts as a further check that the model is sound and performing as it should.
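As one concrete illustration of that kind of check, here is a minimal sketch using permutation feature importance with scikit-learn. The dataset, model, and choice of technique are assumptions made for the example, not a prescription; permutation importance is only one of many interpretability methods.

# Sketch: permutation feature importance as a sanity check on a fitted model.
# Dataset and model are illustrative placeholders, not recommendations.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score;
# features whose shuffling hurts the score most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features so a human can judge whether
# the model is leaning on sensible signals.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda t: t[1],
    reverse=True,
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")

If the features the model depends on most make no domain sense, that is a strong hint something is wrong with the data or the training setup, which is exactly the kind of soundness check described above.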