
Sim-to-Real Reinforcement Learning Applied to End-to-End Vehicle Control

EasyChair Preprint no. 4236

6 pages · Date: September 22, 2020

Abstract

In this work, we study vision-based end-to-end reinforcement learning for vehicle control problems such as lane following and collision avoidance. Our controller policy is able to control a small-scale robot to follow the right-hand lane of a real two-lane road, although its training was carried out solely in simulation. Our model, realized by a simple convolutional network, relies only on images from a forward-facing monocular camera and generates continuous actions that directly control the vehicle. To train this policy we used Proximal Policy Optimization, and to achieve the generalization capability required for real-world performance we used domain randomization. We carried out a thorough analysis of the trained policy by measuring multiple performance metrics and comparing them to baselines that rely on other methods. To assess the quality of the simulation-to-reality transfer and the controller's performance in the real world, we measured simple metrics on a real track and compared them with results from a matching simulation. Further analysis was carried out by visualizing salient object maps.
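To make the setup described above concrete, the sketch below illustrates (in Python/PyTorch) the general kind of pipeline the abstract describes: a small convolutional policy mapping a single forward-facing camera frame to continuous control actions, with image-level randomization applied to simulated frames. This is not the authors' code; the network sizes, the 84x84 input resolution, the two-dimensional action (e.g. steering and throttle), and the ColorJitter-style visual randomization are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a vision-based end-to-end control policy.
# Assumptions (not from the paper): 84x84 RGB input, 2-D continuous action,
# color-jitter as a stand-in for the domain randomization used in training.
import torch
import torch.nn as nn
from torchvision import transforms


class CameraPolicy(nn.Module):
    def __init__(self, action_dim: int = 2):
        super().__init__()
        # Convolutional encoder for a single monocular camera frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 3, 84, 84)).shape[1]
        # Head producing continuous actions in [-1, 1].
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(obs))


# Simple image-level randomization applied to simulated frames during training,
# a common way to encourage sim-to-real generalization.
randomize = transforms.ColorJitter(brightness=0.4, contrast=0.4,
                                   saturation=0.4, hue=0.1)

if __name__ == "__main__":
    policy = CameraPolicy()
    frame = torch.rand(1, 3, 84, 84)       # stand-in for a simulated camera frame
    action = policy(randomize(frame))      # e.g. [steering, throttle]
    print(action.shape)                    # torch.Size([1, 2])
```

In an actual training run, a policy of this shape would be optimized with Proximal Policy Optimization (for example via an off-the-shelf RL library) against a Duckietown-style simulator, which is what the abstract reports.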

Keyphrases: artificial intelligence, autonomous vehicles, deep learning, Duckietown, machine learning, mobile robot, reinforcement learning, sim-to-real, transfer learning

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:4236,
  author = {András Kalapos and Csaba Gór and Róbert Moni and István Harmati},
  title = {Sim-to-Real Reinforcement Learning Applied to End-to-End Vehicle Control},
  howpublished = {EasyChair Preprint no. 4236},
  year = {EasyChair, 2020}}