COMBINATION OF VISUAL METHODS AND CARTOGRAPHIC DATA FOR IMPROVING LANE-LEVEL NAVIGATION ACCURACY

Authors

DOI:

https://doi.org/10.30888/2663-5712.2025-34-01-044

Keywords:

lane-level navigation, autonomous vehicles, segmentation, HD-maps, deep learning, sensors, neural networks, optimization, Kalman filter, LiDAR

Abstract

The object of study is methods for ensuring high-precision lane-level navigation for autonomous vehicles in real road environments. The work is relevant because, at automation levels L4–L5, even an error of tens of centimeters can lead

References

Yoneda, K., Kuramoto, A., Suganuma, N., Asaka, T., Aldibaja, M., & Yanase, R. (2020). Robust Traffic Light and Arrow Detection Using Digital Map with Spatial Prior Information for Automated Driving. Sensors, 20(4), 1181. https://doi.org/10.3390/s20041181

Neven, D., De Brabandere, B., Georgoulis, S., Proesmans, M., & Van Gool, L. (2018). Towards End-to-End Lane Detection: An Instance Segmentation Approach. 2018 IEEE Intelligent Vehicles Symposium (IV), 286–291. https://doi.org/10.1109/IVS.2018.8500547

Chen, X., Milioto, A., Palazzolo, E., Giguère, P., Behley, J., & Stachniss, C. (2019). SuMa++: Efficient LiDAR-based Semantic SLAM. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 4530–4537. https://doi.org/10.1109/IROS40897.2019.8967704

Gu, Z., Cheng, S., Wang, C., Wang, R., & Zhao, Y. (2024). Robust Visual Localization System With HD Map Based on Joint Probabilistic Data Association. IEEE Robotics and Automation Letters, 9(11), 9415–9422. https://doi.org/10.1109/LRA.2024.3457375

Deo, N., & Trivedi, M. M. (2018). Multi-Modal Trajectory Prediction of Surrounding Vehicles with Maneuver based LSTMs. 2018 IEEE Intelligent Vehicles Symposium (IV), 1179–1184. https://doi.org/10.1109/IVS.2018.8500493

Levinson, J., & Thrun, S. (2010). Robust vehicle localization in urban environments using probabilistic maps. 2010 IEEE International Conference on Robotics and Automation (ICRA), 4372–4378. https://doi.org/10.1109/ROBOT.2010.5509700

Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J. M., & Luo, P. (2021). SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. arXiv preprint arXiv:2105.15203. https://doi.org/10.48550/arXiv.2105.15203

Kendall, A., & Gal, Y. (2017). What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? arXiv preprint arXiv:1703.04977. https://doi.org/10.48550/arXiv.1703.04977

Wang, H., Xue, C., Zhou, Y., Wen, F., & Zhang, H. (2021). Visual Semantic Localization based on HD Map for Autonomous Vehicles in Urban Scenarios. 2021 IEEE International Conference on Robotics and Automation (ICRA), 11255–11261. https://doi.org/10.1109/ICRA48506.2021.9561459

Guo, Y., Tian, Z., Li, B., Zhou, J., Yin, Z., Dong, Q., & Ying, S. (2025). Lane-level map matching for vehicles using mask-based raster high-definition maps. Expert Systems with Applications, 287, 128195. https://doi.org/10.1016/j.eswa.2025.128195

Published

2025-11-30

How to Cite

Древич, Л. (2025). COMBINATION OF VISUAL METHODS AND CARTOGRAPHIC DATA FOR IMPROVING LANE-LEVEL NAVIGATION ACCURACY. SWorldJournal, 1(34-01), 148–178. https://doi.org/10.30888/2663-5712.2025-34-01-044

Issue

Section

Articles