1. This article discusses the importance of interpretability in artificial neural networks and deep learning.
2. It proposes a taxonomy for interpretability, reviews recent studies on improving interpretability, and describes applications of interpretability in medicine.
3. It also discusses possible future research directions of interpretability, such as in relation to fuzzy logic and brain science.
The article is generally trustworthy and reliable, offering a well-structured and comprehensive overview of the current state of research on interpretability in artificial neural networks and deep learning. It reviews recent studies on improving interpretability, discusses potential future research directions, and supports its claims by citing relevant publications throughout.
The article does not appear to be biased or one-sided: it contains no promotional content and does not favor particular methods or viewpoints. Possible risks are noted throughout, such as the black-box nature of DNNs becoming an obstacle to their wide adoption in mission-critical applications like medical diagnosis and therapy.
The only potential shortcoming is that the article does not explore counterarguments or identify relevant points of consideration it may have omitted. Overall, however, it is a well-written and comprehensive survey that can be considered reliable and trustworthy.