Figure 1 - uploaded by Jibril Abdullahi Bala
Computer Vision Pipeline

a. Image Acquisition: This involves acquiring the visual feed and images from the camera. The Raspberry Pi Camera Module V2 is envisaged as the prospective image acquisition device. This choice is based on the camera's features, which include the ability to capture still images at a resolution of 3280x2464 (8 MP) and to record 1080p video at a frame rate of 30 frames per second (FPS). The video feed coming into the Raspberry Pi will be evaluated frame by frame, and the processing power of the Raspberry Pi board is expected to be capable of handling the frame rate of the video feed.

b. Image Pre-processing: After acquisition, the images need to be pre-processed before further processing. The image acquired in step (a) is in the Red-Green-Blue (RGB) format. The first step will be colour space conversion, in which the image is converted from the RGB colour format to grayscale. Equation 1 shows the conversion from RGB to grayscale:

I = 0.3R + 0.59G + 0.11B (1)

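The grayscale conversion in Equation 1 can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function name and the choice of NumPy are assumptions:

```python
import numpy as np

# Luma weights from Equation 1: I = 0.3R + 0.59G + 0.11B
GRAY_WEIGHTS = np.array([0.3, 0.59, 0.11])

def rgb_to_grayscale(image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to an H x W grayscale image."""
    # Weighted sum over the colour channels (the last axis)
    return image[..., :3].astype(np.float64) @ GRAY_WEIGHTS

# Example: a single pure-red pixel maps to 0.3 * 255 = 76.5
pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)
print(rgb_to_grayscale(pixel)[0, 0])  # 76.5
```

In practice, each frame of the feed would more commonly be converted with OpenCV's cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), which applies the closely related ITU-R BT.601 weights (0.299, 0.587, 0.114).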
