1. The notion of explainable artificial intelligence has seen a resurgence in recent years, driven by ethical concerns and a lack of user trust.
2. There are two approaches to increasing the trust and transparency of intelligent agents: interpretability and explanation.
3. This article surveys over 250 publications on explanation from social science venues, presenting relevant theories and evidence that can be used to inform explainable AI.
The article is generally trustworthy and reliable: it provides a well-sourced overview of explanation research in the social sciences, citing relevant work from those venues. The author also presents concrete ideas on how this body of work can be applied to explainable AI, offering potential directions for increasing trust in AI applications.
The article does not appear to have any major biases or one-sided reporting, as it presents the surveyed material fairly and objectively. It does not make unsupported claims or omit important points of consideration; its claims are backed by evidence from relevant sources. Additionally, the article does not contain promotional content or show partiality toward one position over another.
The main limitation is that the article does not explore counterarguments or opposing views in depth; however, this is understandable given its focus on presenting theories and evidence from social science venues rather than debating them. Furthermore, possible risks associated with explainable AI are noted throughout the article, so this should not be a major concern for readers.