Learning Human-like Driving Policies from Real Interactive Driving Scenes
Abstract
Traffic simulation has gained a lot of interest from autonomous driving companies for the qualitative safety evaluation of self-driving vehicles. In order to improve self-driving systems from synthetic simulated experiences, traffic agents need to adapt to various situations while behaving as a human driver would. However, simulating realistic traffic agents remains challenging because human driving style cannot easily be encoded in a driving policy. Adversarial Imitation Learning (AIL) has already proved that realistic driving policies can be learned from demonstrations, but mainly on highways (NGSIM dataset). Nevertheless, traffic interactions are very restricted on straight lanes, and practical use cases of traffic simulation require driving agents that can handle more varied road topologies such as roundabouts, complex intersections, or merges. In this work, we analyse how to learn realistic driving policies with AIL algorithms on the real and highly interactive driving scenes of the Interaction Dataset. We introduce a new driving policy architecture built upon the Lanelet2 map format, which combines a path planner with an action space in curvilinear coordinates to reduce exploration complexity during learning. We leverage the benefits of reward engineering and a variational information bottleneck to propose an algorithm that outperforms all AIL baselines. We show that our learned agent is not only able to imitate human-like drivers but can also adapt safely to situations unseen during training.