
Details
Joint research with Volkswagen, Google Research, the University of Warsaw, the Polish Academy of Sciences, and the Jagiellonian University; presented at a NeurIPS 2020 workshop
- The paper introduces interactive traffic scenarios in the CARLA simulator that are based on real-world traffic.
- CARLA Real Traffic Scenarios (CRTS) is intended to serve as a training and testing ground for autonomous driving systems.
- The work shows how to obtain competitive policies and experimentally evaluates how observation types and reward schemes affect the training process and the resulting agent's behavior.
Authors: Błażej Osiński, Piotr Miłoś, Adam Jakubowski, Paweł Zięcina, Michał Martyniak, Christopher Galias, Antonia Breuer, Silviu Homoceanu, Henryk Michalewski
Abstract
This work introduces interactive traffic scenarios in the CARLA simulator, which are based on real-world traffic. We concentrate on tactical tasks lasting several seconds, which are especially challenging for current control methods. The CARLA Real Traffic Scenarios (CRTS) is intended to be a training and testing ground for autonomous driving systems. To this end, we open-source the code under a permissive license and present a set of baseline policies. CRTS combines the realism of traffic scenarios and the flexibility of simulation. We use it to train agents using a reinforcement learning algorithm. We show how to obtain competitive policies and evaluate experimentally how observation types and reward schemes affect the training process and the resulting agent’s behavior.
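For orientation, below is a minimal sketch of how an agent could interact with a CRTS scenario, assuming the environment is exposed through a classic OpenAI Gym-style interface. The environment id, observation contents, and scenario name used here are hypothetical placeholders, not the actual CRTS API; consult the open-sourced code for the real interface.

```python
import gym  # CRTS scenarios are assumed here to follow the classic Gym API

# Hypothetical environment id; the real scenario names live in the CRTS code base.
env = gym.make("CrtsLaneChange-v0")

obs = env.reset()
done = False
total_reward = 0.0
while not done:
    # Placeholder policy: sample a random action from the action space.
    # A trained agent (e.g. a PPO policy) would map `obs` to an action here.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    total_reward += reward

print(f"Episode return: {total_reward:.2f}")
env.close()
```

In such a setup, swapping the random action for a learned policy and varying the observation type (e.g. bird's-eye view versus vector features) and the reward scheme is what the paper's experiments compare.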