Argumentation as a Framework for Interactive Explanations for Recommendations
Keywords
- Applications that combine KR with other areas-General
- Explanation finding, diagnosis, causal reasoning, abduction-General
- Argumentation-General
Abstract
As AI systems become ever more intertwined with our personal lives, the way in which they explain themselves to and interact with humans is an increasingly critical research area. The explanation of recommendations is thus a pivotal functionality in a user’s experience of a recommender system (RS), offering the possibility of enhancing many of its desirable features in addition to its effectiveness (accuracy with respect to users’ preferences). For an RS that we demonstrate empirically to be effective, we show how argumentative abstractions underpinning recommendations can provide the structural scaffolding for (different types of) interactive explanations (IEs), i.e. explanations supporting interactions with users. We prove formally that these IEs empower feedback mechanisms that guarantee that recommendations improve over time, hence rendering the RS scrutable. Finally, we show experimentally that the various forms of IE (tabular, textual and conversational) induce trust in the recommendations and provide a high degree of transparency in the RS’s functionality.
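The abstract only names the ingredients of the approach. As a purely illustrative sketch, not the paper's actual model, the following Python snippet shows one way the same argumentative abstraction could drive all three roles mentioned above: scoring an item, reading off an explanation, and absorbing user feedback so that scores move toward the user's preferences. All names here (`Aspect`, `score`, `explain`, `give_feedback`) are our own assumptions for illustration.

```python
# Hypothetical sketch: arguments for/against an item form an abstraction
# from which a score, an explanation, and a feedback handle are all derived.
# None of these names or mechanics come from the paper itself.

from dataclasses import dataclass, field

@dataclass
class Aspect:
    name: str
    polarity: int   # +1 if the aspect supports the item, -1 if it attacks it
    weight: float   # current strength in [0, 1], adjusted by user feedback

@dataclass
class Item:
    name: str
    aspects: list = field(default_factory=list)

    def score(self) -> float:
        # Aggregate argument strengths into a single recommendation score.
        return sum(a.polarity * a.weight for a in self.aspects)

    def explain(self) -> str:
        # A textual explanation is read directly off the same abstraction.
        pros = [a.name for a in self.aspects if a.polarity > 0]
        cons = [a.name for a in self.aspects if a.polarity < 0]
        return f"{self.name}: recommended for {pros}, despite {cons}"

def give_feedback(aspect: Aspect, agree: bool, step: float = 0.1) -> None:
    # Feedback on one argument moves its weight monotonically (clamped to
    # [0, 1]), so repeated feedback moves the item's score toward the
    # user's stated preferences.
    if agree:
        aspect.weight = min(1.0, aspect.weight + step)
    else:
        aspect.weight = max(0.0, aspect.weight - step)

movie = Item("Movie X", [Aspect("acting", +1, 0.8),
                         Aspect("plot", +1, 0.5),
                         Aspect("pacing", -1, 0.6)])
print(movie.explain(), round(movie.score(), 2))
give_feedback(movie.aspects[2], agree=False)   # user disputes the "pacing" attack
print(movie.explain(), round(movie.score(), 2))  # score rises accordingly
```

Because the feedback update is bounded and monotone per argument, each interaction can only move an item's score in the direction the user indicated, which is a toy analogue of the improvement-over-time guarantee the abstract claims for the full system.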