Deep Neural Network for Visual Localization of Autonomous Car in ITS Campus Environment

Rudy Dikairono, Hendra Kusuma, Arnold Prajna

Abstract


The Intelligent Car (I-Car) ITS is an autonomous car prototype whose primary localization method relies on reading GPS data. However, the accuracy of GPS readings depends on the availability of information from GPS satellites, which in turn is affected by local conditions such as weather or atmospheric disturbances, signal blockage, and building density. In this paper we propose a solution to the unavailability of GPS localization information: recognizing the environment around the ITS campus from omnidirectional camera images using a Deep Neural Network. During data collection, GPS coordinates are recorded as output reference points while the omnidirectional camera captures images of the surrounding environment. Visual localization trials were carried out in the ITS campus environment with a total of 200 GPS coordinates, where each GPS coordinate represents one class, giving 200 classes for classification. Each coordinate/class has 96 training images, obtained at a vehicle speed of 20 km/h with an image acquisition rate of 30 fps from the omnidirectional camera. Using the AlexNet architecture, the resulting visual localization accuracy is 49-54%. These results were obtained with a learning rate of 0.00001, data augmentation, and dropout to prevent overfitting and improve accuracy stability.
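To make the classification setup concrete, the following is a minimal sketch (not the authors' code) of how the described pipeline could be assembled in PyTorch: AlexNet with its final layer resized to 200 GPS-coordinate classes, trained with data augmentation, dropout (already present in AlexNet's classifier head), and a learning rate of 0.00001. The dataset path, batch size, specific augmentation transforms, and choice of optimizer are assumptions.

# Hedged sketch of the abstract's classification setup; paths, transforms,
# batch size, and optimizer are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 200      # one class per surveyed GPS coordinate
LEARNING_RATE = 1e-5   # learning rate reported in the abstract

# Data augmentation (the exact transforms used in the paper are not specified)
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),          # AlexNet's expected input size
    transforms.ColorJitter(0.2, 0.2, 0.2),  # simulate illumination changes
    transforms.ToTensor(),
])

# Assumed layout: one folder of omnidirectional frames per GPS-coordinate class
train_set = datasets.ImageFolder("its_campus/train", transform=train_tf)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# AlexNet with the final layer resized to 200 classes; its classifier head
# already contains Dropout layers, matching the paper's use of dropout.
model = models.alexnet(weights=None)
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()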




DOI: https://doi.org/10.12962/jaree.v7i2.365



This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.