Guidance control of vehicles based on visual feedback via the Internet
© Lee. 2015
Received: 1 September 2014
Accepted: 19 May 2015
Published: 6 June 2015
This paper investigates vehicle guidance control based on visual information captured by a webcam mounted on the vehicle. Image processing techniques, including binarization, Canny edge detection, and the Hough transform, identify the road lines and thereby define the drivable area. The proposed control scheme adopts a simple algorithm to guide the vehicle so that it stays inside the drivable area bounded by the road lines. The drivable-area computation and the control algorithm run in a control center that connects to the vehicle over a WiFi wireless communication link; the image information is transmitted back to the control center over the same link. The control center can therefore not only monitor the vehicle within a given area but also control it dynamically in real time. To simplify the experimental setup, the drivable area is defined as a superhighway that allows only cars on the road. Two experimental results, one on a straight road and the other on a curved road, demonstrate the effectiveness of the proposed guidance control system.
Many vision-based control systems for vehicles have been developed to detect dangerous areas, obstacle edges, and road lines [1, 2]. One detection method is line detection, which identifies the road lines bounding the drivable area; detecting the lines thus defines the drivable area. Line detection also provides a basis for safe driving, supporting warning systems that help avoid collisions [3–6]. Moreover, as wireless communication is now widely deployed, modern vehicles are usually equipped with wireless components such as GPS, webcam, and WiFi modules. Control methodologies for the locomotion and turning guidance of a vehicle have also been developed to make automatic driving feasible [7, 8].
In addition, during a long journey on the freeway, drivers easily fall asleep because the road is usually straight and smooth for long stretches. The proposed vehicle guidance control system is developed based on visual feedback images taken in front of the vehicle. When the images are transmitted back to the control center, an image processing program first analyzes them and identifies the road lines; the guidance control algorithm then generates control signals to keep the vehicle inside the drivable area, that is, within the road lines. Finally, the control center sends the control signals to the vehicle to guide its motion.
In the image processing stage, binarization is followed by Canny edge detection and the Hough transform to identify the edges of the road lines and plot them on the display. To simplify the calculation of the distances between the vehicle and the road lines, the webcam is mounted at the center of the vehicle at a fixed height. Using a simple feedback control technique, the vehicle automatically adjusts its yaw angle so that it follows the correct trail inside the drivable area. Two experiments, one on a straight road and the other on a curved road, validate the performance of the proposed vision-based guidance control system; both results are satisfactory. Recently, 3D techniques have been incorporated into vehicle guidance design and have improved control performance [8, 9]. Real-time operation is also important because the environment changes rapidly. The proposed control scheme therefore not only provides vehicle guidance based on visual feedback via the Internet but also addresses the real-time issue.
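The Hough voting step at the heart of the line detection pipeline can be illustrated with a stripped-down NumPy implementation. This is only a sketch of the technique: the full system would use library routines (e.g., OpenCV's Canny and Hough functions) after binarization, whereas this example votes directly on a synthetic binary edge map.

```python
import numpy as np

def hough_accumulate(edges, n_theta=180):
    """Vote in (rho, theta) space for every edge pixel.

    edges: 2D binary array (nonzero = edge pixel).
    Returns the accumulator plus the rho and theta axes.
    """
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))           # maximum possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((rhos.size, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for t_idx, theta in enumerate(thetas):
        # Normal form of a line: rho = x*cos(theta) + y*sin(theta)
        r = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(acc, (r + diag, t_idx), 1)      # one vote per edge pixel
    return acc, rhos, thetas

# Synthetic edge map containing a single vertical line at x = 20.
edges = np.zeros((50, 50), dtype=np.uint8)
edges[:, 20] = 1
acc, rhos, thetas = hough_accumulate(edges)
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
best_rho, best_theta = rhos[r_idx], thetas[t_idx]
```

The accumulator peak recovers the line parameters: for the vertical line at x = 20, the strongest cell is at theta = 0 and rho = 20, which is how the detected road lines are obtained as (rho, theta) pairs and then plotted on the display.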
2 System structure
2.1 Self-propelled vehicle
2.2 Overall system architecture
2.3 Control process of overall system
3 Guidance control
In Fig. 5, the control process activates only if the connection between the control center and the vehicle is successful; otherwise, control priority remains with the driver. Once the algorithm begins, the vehicle tries to keep its position at the center of the road. To respond to errors more quickly, a set of warning lines is designed to keep the vehicle inside the drivable area; the warning lines also compensate for the time delay in transmitting the newest control signals. The algorithm comprises the guidance law, the server algorithm, and the client algorithm to ensure successful guidance.
Note that the warning lines do not appear in the real image; they are defined only in the program for guidance control of the vehicle.
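The warning-line idea can be sketched as a simple threshold rule on the vehicle's lateral offset. This is a minimal illustration, assuming the normalized scale used in the experiments (road lines at −100/+100, lane center at 0, warning lines at ±50); the sign convention (positive = right of center) and the command names are assumptions, not the paper's exact interface.

```python
def guidance_command(offset, warning=50.0):
    """Steering decision from the vehicle's lateral offset.

    offset: lateral position on a normalized scale where the road
    lines sit at -100/+100 and the lane center at 0; the warning
    lines sit at +/-warning, inside the road lines, so the command
    fires before the vehicle actually reaches a road line.
    """
    if offset > warning:        # crossed the right warning line
        return "steer_left"
    if offset < -warning:       # crossed the left warning line
        return "steer_right"
    return "straight"           # inside the warning lines: hold course
```

Because the correction starts at the warning line rather than the road line, the remaining margin absorbs the transmission delay before the newest control signal takes effect.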
4 Experimental results
Two experiments were conducted with the proposed vision-based vehicle guidance control system; they are described as follows.
4.1 Straight lane case
4.2 Curved lane case
As in the straight lane case, the experimental result of the curved lane guidance is shown in Fig. 10. The lower right part of Fig. 10a shows the webcam image after line detection; the road lines are marked in red, and the center of the vehicle is marked in blue. The vehicle speed is slightly lower in the curved lane case for safety reasons. In Fig. 10b, the road lines are defined as [−100, 100], the center as 0, and the warning lines as [−50, 50] to show the effectiveness of the proposed guidance law. In the curved road case, the trajectory of the vehicle trembles more than in the straight road case.
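The normalized scale used in Fig. 10b can be computed from the detected line positions in the image. The following mapping is a hypothetical reconstruction: the variable names and the exact formula are illustrative, since the paper does not state how the pixel columns are converted.

```python
def normalized_offset(x_vehicle, x_left, x_right):
    """Map pixel columns onto the [-100, 100] scale of Fig. 10b.

    x_left / x_right: columns of the detected left and right road
    lines in the image; x_vehicle: column of the vehicle-center
    marker. Returns 0 at the lane center, -100/+100 at the lines.
    """
    center = (x_left + x_right) / 2.0
    half_width = (x_right - x_left) / 2.0
    return 100.0 * (x_vehicle - center) / half_width
```

Under this convention the warning lines at ±50 correspond to the vehicle drifting halfway from the lane center toward either road line.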
A vision-based vehicle guidance control system has been developed and verified with experiments. First, image information of the road is acquired by the webcam mounted on the self-propelled vehicle and sent to the control center via a wireless network. The road lines are then detected by a series of image-processing techniques, and a set of warning lines is designed to prevent the vehicle from crossing the road lines. Based on this information, a simple guidance control law keeps the vehicle inside the drivable area. Two experimental results show the effectiveness of the proposed vision-based vehicle guidance control system with satisfactory performance.
The author would like to thank Jhe-Yu Guan, Wei-Liang Chen, Shao-Hsuan Hsu, Hao-Hsiang Yang, and Li-Wei Liu for their help in the experimental setup and implementation.
- A Broggi, M Bertozzi, A Fascioli, C Guarino Lo Bianco, A Piazzi, Visual perception of obstacles and vehicles for platooning. IEEE Trans. Intell. Transp. Syst. 1(3), 164–176 (2000)
- AG Mohapatra, Computer vision based smart lane departure warning system for vehicle dynamics control. Sens. Transducers J. 132(9), 122–135 (2011)
- B Yu, W Zhang, Y Cai, A lane departure warning system based on machine vision, in Proc. 2008 IEEE Pacific-Asia Workshop on Computational Intelligence and Industrial Application, vol. 1, pp. 197–201 (2008). doi:10.1109/PACIIA.2008.142
- OO Khalifa, R Islam, A Am Assidi, A-H Abdullah, S Khan, Vision based road lane detection system for vehicles guidance. Aust. J. Basic Appl. Sci. 5(5), 728–738 (2011)
- W Li, Human-like driving for autonomous vehicles using vision-based road curvature modeling. Int. J. Hybrid Inf. Technol. 6(5), 103–116 (2013). doi:10.14257/ijhit.2013.6.5.10
- C Kreucher, S Lakshmanan, K Kluge, A driver warning system based on the LOIS lane detection algorithm, in Proc. IEEE Int. Conference on Intelligent Vehicles, pp. 17–22 (1998)
- ED Dickmanns, N Muller, Scene recognition and navigation capabilities for lane changes and turns in vision-based vehicle guidance. Control Eng. Pract. 4(5), 589–599 (1996). doi:10.1016/0967-0661(96)00041-X
- KR Llewellyn, Visual guidance of locomotion. J. Exp. Psychol. 91(2), 245–261 (1971). doi:10.1037/h0031788
- ED Dickmanns, T Christians, Relative 3D-state estimation for autonomous visual guidance of road vehicles. Robot. Auton. Syst. 7(2–3), 113–123 (1991). doi:10.1016/0921-8890(91)90036-K
- Z Hu, K Uchimura, Real-time data fusion on tracking camera pose for direct visual guidance, in Proc. 2004 IEEE Intelligent Vehicles Symposium, pp. 842–847 (2004). doi:10.1109/VS.2004.1336494
- H-T Lee, W-C Lin, C-H Huang, Wireless indoor surveillance robot with a self-propelled patrolling vehicle. J. Robot. 2011, 9 pages (2011). doi:10.1155/2011/197105
- R Jošth, M Dubská, A Herout, J Havel, Real-time line detection using accelerated high-resolution Hough transform, in Proc. 17th Scandinavian Conference on Image Analysis, LNCS 6688, pp. 784–793 (2011). doi:10.1007/978-3-642-21227-7_73
- A Broggi, M Bertozzi, A Fascioli, C Guarino Lo Bianco, A Piazzi, The Argo autonomous vehicle's vision and control systems. Int. J. Intelligent Control Syst. 3(4), 409–441 (1999)
- H Cheng, N Zheng, C Sun, H van de Wetering, Vanishing point and Gabor feature based multi-resolution on-road vehicle detection, in Proc. Third Int. Conference on Advances in Neural Networks, LNCS 3973, pp. 46–51 (2006). doi:10.1007/11760191_7
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.