In the near future, when Artificial Intelligence (intelligence displayed by machines) becomes widespread and autonomously makes important decisions that impact our daily lives, the issue of its accountability under the law will arise.
AI systems are expected to justify their decisions without revealing all of their internal workings, so as to protect the commercial advantage of AI providers. Moreover, mapping the inputs and intermediate representations of AI systems to human-interpretable concepts is a challenging problem, because these systems tend to operate as black boxes.
As such, explanation systems should be considered distinct from the AI systems themselves.
This paper, written by researchers at Harvard University, highlights some interesting aspects of this debate and shows that the problem is by no means straightforward.