
Interpretable Model-based Hierarchical Reinforcement Learning Using Inductive Logic Programming

EasyChair Preprint no. 5668, version 1

12 pages. Date: June 3, 2021

Abstract

Recently, deep reinforcement learning (RL) has achieved many successes in a wide range of applications, but it notoriously lacks data-efficiency and interpretability. Data-efficiency matters because interacting with the environment is expensive. Interpretability increases the transparency of black-box-style deep RL models and helps gain the trust of users of RL systems. In this work, we propose a new hierarchical framework for symbolic RL that leverages a symbolic transition model to improve data-efficiency and make the learned policy interpretable. The framework consists of a high-level agent, a subtask solver, and a symbolic transition model. Without assuming any prior knowledge of the state transitions, we adopt inductive logic programming (ILP) to learn the rules governing symbolic state transitions, which makes the learned behavior understandable to users. In empirical experiments, we confirmed that the proposed framework improves data-efficiency by 30%–40% over previous methods.
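
To make the framework concrete, below is a minimal, hypothetical Python sketch of the loop the abstract describes: a high-level agent plans over a symbolic transition model, and a subtask solver carries out the chosen subgoal. The predicate names (at_key, has_key, door_open), the propositional rule format, and the one-step-lookahead planner are illustrative assumptions only; in the paper, the transition rules would be induced by ILP from experience rather than hand-written.

    # Hypothetical sketch: all names and the rule format are illustrative
    # assumptions, not the authors' actual implementation.

    # Learned symbolic transition rules: a head predicate holds in the next
    # state if all literals in some body hold in the current state.
    RULES = {
        "has_key": [("at_key",)],               # being at the key yields the key
        "door_open": [("has_key", "at_door")],  # key + at door opens the door
    }

    def symbolic_transition(state):
        """Apply every rule once to predict the next symbolic state."""
        nxt = set(state)
        for head, bodies in RULES.items():
            if any(all(lit in state for lit in body) for body in bodies):
                nxt.add(head)
        return nxt

    def high_level_agent(state, goal, subgoals):
        """Pick the subgoal whose predicted outcome achieves the goal,
        else fall back to an unvisited subgoal (exploration)."""
        for sub in subgoals:
            if goal in symbolic_transition(state | {sub}):
                return sub
        for sub in subgoals:
            if sub not in state:
                return sub
        return subgoals[0]

    def subtask_solver(state, subgoal):
        """Stand-in for the low-level RL policy that achieves the subgoal."""
        return state | {subgoal}

    # One episode of the hierarchical loop.
    state, goal, subgoals = set(), "door_open", ["at_key", "at_door"]
    for step in range(10):
        sub = high_level_agent(state, goal, subgoals)
        state = symbolic_transition(subtask_solver(state, sub))
        if goal in state:
            print(f"goal reached after {step + 1} high-level steps")
            break

Because the rules are explicit logical clauses, the predicted transitions (and hence the high-level plan) can be read and checked directly, which is the source of the interpretability the abstract claims.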

Keyphrases: hierarchical learning, inductive logic programming, planning, reinforcement learning

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:5668,
  author = {Duo Xu and Faramarz Fekri},
  title = {Interpretable Model-based Hierarchical Reinforcement Learning Using Inductive Logic Programming},
  howpublished = {EasyChair Preprint no. 5668},
  year = {EasyChair, 2021}}