Our next project in this module is to create a dashboard screen for autonomous vehicles, so I'll do my best to learn as much as I can about autonomous vehicles and how they work, in order to better understand the task at hand and design a proficient UX/UI for the dashboard. I'm very happy to commence this project, and although I know it will entail a lot of challenges, I'm looking forward to the new skills and knowledge it will provide.


When did it start?

The history of autonomous vehicles began in the 1960s and 1970s, when Japan's Tsukuba Mechanical Engineering Laboratory designed the first self-driving car that did not rely on rails or wires under the road. The car navigated itself using analog computer technology that processed signals received through two cameras.

How do they work?

An AV is a car that uses technology to partially or entirely replace a human driver in navigating from point A to point B while responding to hazards, traffic signs, and road conditions. These cars see their surroundings using three main "eyes": radar, cameras, and laser-based LiDAR, which stands for "light detection and ranging". These systems feed data into on-board processors, which use sophisticated software, algorithms, and machine learning to send signals to the vehicle's actuators, triggering appropriate actions such as braking, steering, and acceleration. Thanks to these methods of detection, AVs can detect lane markings, curbs, pedestrians, cyclists, and other vehicles around them. For accurate detection, the system combines the cameras on the car, the LiDAR, which bounces light pulses off surrounding objects, and the radar to detect and track objects and determine their distance, direction, and velocity.
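To make that sense-fuse-decide-actuate loop concrete, here is a minimal sketch in Python. Every class and function name here is my own illustrative placeholder, not from any real AV software stack, and the decision rule is deliberately toy-simple compared to the machine learning real systems use:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object reported by a sensor (simplified, hypothetical model)."""
    kind: str          # e.g. "pedestrian", "vehicle", "lane_marking"
    distance_m: float  # how far away the object is
    bearing_deg: float # angle from the car's heading (0 = dead ahead)

def fuse(camera_hits, radar_hits, lidar_hits):
    """Naively pool detections from all three sensors into one world view."""
    return list(camera_hits) + list(radar_hits) + list(lidar_hits)

def decide(detections):
    """Turn the fused world view into actuator commands."""
    brake, steer, throttle = 0.0, 0.0, 0.3  # gentle cruising by default
    for d in detections:
        # Anything close and roughly dead ahead triggers hard braking.
        if d.distance_m < 30 and abs(d.bearing_deg) < 10:
            brake, throttle = 1.0, 0.0
    return {"brake": brake, "steer": steer, "throttle": throttle}

# One tick of the loop: sense -> fuse -> decide -> actuate.
world = fuse(
    camera_hits=[Detection("lane_marking", 5.0, -15.0)],
    radar_hits=[Detection("vehicle", 22.0, 2.0)],
    lidar_hits=[],
)
print(decide(world))  # {'brake': 1.0, 'steer': 0.0, 'throttle': 0.0}
```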

Cameras on AVs detect many frequencies of visible light, similar to how the human eye sees. AVs have a 360-degree view of their surroundings, and with their AIP (advanced image processing) they can detect and recognize objects such as other vehicles, lane markers, and road signs. They can also estimate the distance between objects and the vehicle. The AIP degrades when visibility is poor, and the system can also find it hard to interpret the input if a sticker is placed on a road sign or on the road itself; this is why there are two other methods of detection to help the car navigate itself.
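How a camera estimates distance isn't spelled out above, but one standard geometric approach is the pinhole camera model: an object of known real-world size appears smaller in the image the farther away it is. Below is a minimal sketch of that idea; the function name and the example numbers are my own illustrative choices, not from any real AV system:

```python
def camera_distance_m(focal_length_px: float, real_height_m: float,
                      apparent_height_px: float) -> float:
    """Pinhole model: distance = focal length x real size / apparent size."""
    return focal_length_px * real_height_m / apparent_height_px

# Example: a 1.5 m-tall pedestrian spanning 75 px in an image taken with a
# 1000 px focal length works out to about 20 m away.
print(camera_distance_m(1000.0, 1.5, 75.0))  # 20.0
```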

Radars work by sending out a burst of radio waves, which travel in a straight line until they hit an object, causing them to reflect back to the radar's antenna. The radar system is then able to measure how long it took for the echo to return; this duration lets it calculate the object's distance, the direction it is in, which way it is moving, and at what speed.
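The arithmetic behind this is round-trip time of flight, and speed comes from the Doppler shift of the returned wave. Here is a small sketch of both; the function names are mine, but the physics (range = c·t/2, speed = f_d·c/2f_tx) is standard radar theory:

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range_m(echo_delay_s: float) -> float:
    """Round-trip time of flight: the wave travels out AND back, hence /2."""
    return C * echo_delay_s / 2

def radar_radial_speed_mps(doppler_shift_hz: float, tx_freq_hz: float) -> float:
    """Relative (closing) speed from the Doppler shift of the echo."""
    return doppler_shift_hz * C / (2 * tx_freq_hz)

# An echo arriving 400 ns after transmission puts the object ~60 m away.
print(radar_range_m(400e-9))                # ~59.96
# A 3.7 kHz Doppler shift on a 77 GHz automotive radar is ~7.2 m/s closing.
print(radar_radial_speed_mps(3_700, 77e9))  # ~7.2
```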

A LiDAR scanner sends out rapid pulses of laser light. This system works much like radar, but instead of radio waves it sends laser light pulses, and the distance is measured by the time lapse between the outgoing light pulse and the detection of the reflected pulse.
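Because each pulse is fired at a known beam angle, the measured range can be turned into a 3D point around the car. This sketch uses standard spherical-to-Cartesian geometry; the axis convention (x forward, y left, z up) is my own assumption for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lidar_point(echo_delay_s: float, azimuth_deg: float,
                elevation_deg: float) -> tuple:
    """Turn one pulse's timing and beam angles into an (x, y, z) point."""
    r = C * echo_delay_s / 2             # same round-trip idea as radar
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)  # forward
    y = r * math.cos(el) * math.sin(az)  # left
    z = r * math.sin(el)                 # up
    return (x, y, z)

# A return after 133 ns at 0 deg azimuth/elevation: a point ~20 m dead ahead.
print(lidar_point(133e-9, 0.0, 0.0))  # (~19.94, 0.0, 0.0)
```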

[Image: diagram of the role and sensing range of each detection system]

This image perfectly displays the role of each system. As seen, the "eyes" autonomous vehicles have enable them to see far ahead and far behind, allowing them to relay the gathered information to the GPS system. Of course, some of these systems can be hindered, and that's what keeps AVs from being 100% safe: their reliability can be diminished when lane markings are covered by snow or other heavy precipitation.

Levels of Driving

[Image: chart of the levels of driving automation]

Throughout my research I've also come to learn that there are various levels of autonomous driving. The image above explains in detail what each level entails.

Level 0: This level has no automation whatsoever; the driver does all the driving.

Level 1: The driver is assisted with steering, braking, and acceleration through features like lane assist and adaptive cruise control, but is still required for all critical functions. I think most cars on the road these days are at this level.

Level 2: This level of driving is partially autonomous, with at least two simultaneous tasks (such as steering and acceleration) being managed by the vehicle in specific scenarios