Impact & Opinions | Tionchar & Tuairimí

Are We There Yet? Reflections on Artificial Intelligence in Autonomous Cars
AI & Human Creativity


16 December 2021 | 15 MINS

Nothing raises questions around the mingling of human and technology quite like autonomous cars. With multiple levels of AI involved, at what point does human judgment intervene? Prof Edward Jones explores the challenges of this rapidly developing technology.

As in many other areas of life, artificial intelligence (AI) is being applied increasingly in the field of transportation.

One sector where this is very apparent is the development of technology for autonomous and semi-autonomous passenger cars, i.e., “cars that drive themselves”. They do this using sensors that monitor the environment around the vehicle, combined with sophisticated AI technology that processes this information and tries to make sense of it. This could include detecting pedestrians and other road users, recognising road features, predicting collisions, and so on. On that basis, the car navigates safely to its destination. There are high levels of expectation for this technology, and equally high levels of scepticism about what it can and cannot do. It has been a topic of much interest for some time now, so it’s worth reflecting on the interplay between the technology and the humans who are expected to benefit from its use, and on some of the similarities and differences between person and machine.

It’s important to acknowledge too that a failure to appreciate the interface between person and machine can have negative consequences, and that “blind” application of AI is not always the best solution. As the old saying goes, when all you have is a hammer, everything looks like a nail. But while there’s no doubt that AI technology can solve many problems (in many cases, better than humans can), the potential for confusion between human and machine also needs to be carefully considered.

One important – if obvious – point to make at the outset is this: it’s not easy to make cars drive themselves. There have been impressive developments in this technology, but there remain many challenges. Interestingly, some of the areas that are proving difficult to crack are the same ones where human drivers also struggle. A good example is driving in bad weather, or at night, or in bad weather at night. All drivers have experienced the stress and fatigue associated with driving in such conditions: it’s hard to see what’s happening outside, to make sense of all of the imperfect information that your brain is expected to process, and to make a sensible decision on that basis. To a degree, the same also holds true for AI technology. This highlights one way in which driver and technology are similar. As drivers, we take information from our “sensors” – eyes, ears, hands on the steering wheel so you can “feel” the road – and we use our brain to process all that information and make a decision. A driverless car also takes information from its “sensors” – cameras, radar and other sensors – and also uses AI to process all of that information to make a decision. In both cases, a good decision depends on all of the pieces working well – sensors and brain. So having good sensors (whether human or machine) is at least as important as having a brain that works properly. In this context, perhaps human and machine are not all that different after all.
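The sensors-plus-brain pipeline described above can be sketched in code. This is a toy illustration, not how any real vehicle works: the function names, weights and thresholds are all invented, and real systems fuse far richer data. The sketch captures one idea from the paragraph: poor visibility degrades a camera far more than it degrades radar, and a good decision depends on all the pieces together.

```python
# Toy sketch (hypothetical names and thresholds) of fusing imperfect
# sensor readings into a single driving decision.

def fuse_detections(camera_conf, radar_conf, visibility=1.0):
    """Combine obstacle-detection confidences from two sensors.

    camera_conf / radar_conf: confidence in [0, 1] that an obstacle is ahead.
    visibility: 1.0 in clear daylight, lower in rain or darkness, which
    hurts the camera far more than the radar.
    """
    # Weight the camera by visibility; radar is largely weather-independent.
    camera_weight = 0.6 * visibility
    radar_weight = 0.4
    total = camera_weight + radar_weight
    return (camera_weight * camera_conf + radar_weight * radar_conf) / total


def decide(camera_conf, radar_conf, visibility=1.0, threshold=0.5):
    """Brake if the fused confidence crosses the threshold."""
    fused = fuse_detections(camera_conf, radar_conf, visibility)
    return "brake" if fused >= threshold else "continue"
```

Note what happens at night in the rain: even if the camera barely sees the pedestrian, a confident radar return can still dominate the fused score and trigger braking, much as a human driver leans on whichever “sensor” is working best.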

A further important (even critical) point is that users must develop a proper understanding of what a given piece of technology is actually supposed to do. For example, a common source of confusion lies in the exact definition of “autonomous” or “driverless”. Many people, when they think of autonomous vehicles, do so in terms of a vehicle that does 100% of the work, with the driver doing nothing and in effect becoming a mere “occupant”. However, there are many levels of autonomy, and limited forms of autonomy are already widely available. A six-level scale (developed by the Society of Automotive Engineers) is often used to describe the different levels of autonomy, ranging from Level 0 (fully manual) to Level 5 (fully autonomous). Common functions such as adaptive cruise control and lane keeping assist can be considered Level 1 or 2, and indeed are sometimes regarded as “driver assistance” functions rather than autonomous functions. Going up the scale, there is an increasing level of “handover” from the human driver to the vehicle, and this includes features that allow the car to operate autonomously in some conditions. However, in the case of much of the technology that is currently available in the market, the driver is still responsible for the safe operation of the vehicle. Even where the car is driving itself, the driver must still pay full attention and must be in a position to take over control immediately if necessary. This point has been central to some of the high-profile accidents (some fatal) involving autonomous cars in the past few years, where in at least some cases, the technology failed to react as intended, but the driver was also not paying attention as they should have been. So, understanding exactly what the technology can do, and equally what the driver is expected to do, is critical. Like any other area of life where artificial intelligence is used, it’s important to recognise that it has its limitations.
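The SAE scale described above can be summarised in a short sketch. The one-line summaries are informal paraphrases for illustration, not the standard’s wording, and the helper function is a simplification of a nuanced legal and technical question.

```python
from enum import IntEnum

# Sketch of the SAE levels of driving automation (Level 0 to Level 5).
# Summaries are informal paraphrases, not the text of the standard.
class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # the human does everything
    DRIVER_ASSISTANCE = 1       # e.g. adaptive cruise control OR lane keeping
    PARTIAL_AUTOMATION = 2      # steering and speed together; driver supervises
    CONDITIONAL_AUTOMATION = 3  # car drives in some conditions; driver must take over on request
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain
    FULL_AUTOMATION = 5         # no driver needed anywhere


def driver_must_supervise(level: SAELevel) -> bool:
    """Simplification: at Levels 0-2 the human must monitor the road at all times."""
    return level <= SAELevel.PARTIAL_AUTOMATION
```

The point the article makes sits exactly at the boundary this function draws: much of what is marketed today is Level 1 or 2, where the driver remains fully responsible, yet the word “autonomous” invites people to behave as though they were at Level 4 or 5.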

For that matter, what do we even mean by “intelligence”? There’s the “traditional” notion of intelligence (“good at the books”), which generally refers to the ability to handle, analyse and process factual information, and to draw conclusions and solve problems. AI is generally extremely good at this sort of task – in many cases, it is even more effective than humans, who often struggle with handling very large amounts of information at once. To all intents and purposes, AI technology solves such data-driven tasks by “learning” the relationships between input data from sensors and the corresponding decision. We also have a slightly fuzzier form of intelligence, which could be considered to be the ability to handle interactions and relationships with other people (this is often referred to as emotional intelligence). How is this relevant to autonomous cars? Well, to give one example, most or all drivers have been in a situation where they’re approaching a junction where they want to turn right (in jurisdictions where you drive on the left-hand side of the road, like Ireland) but there’s oncoming traffic which has right of way.

Objective logic (and the rules of the road) dictates that you wait until the oncoming traffic has passed and the road is clear to turn right (even if you’re causing an obstruction and traffic is backing up behind you). But then an oncoming driver flashes their lights at you. And you know straight away they’re giving you the all-clear to turn across the traffic so you can go on your way, and so can the traffic that was backed up behind you. Everyone’s happy. On the other hand, an AI-based system in a driverless car may spot the flashing lights, but it may not be able to interpret the exact intent of the gesture like a human driver would.

Of course, the reason human drivers can manage this situation is at least partly because they, too, have “learned” what the flashing lights gesture means based on experience, so there’s no reason in principle why an AI system could not also “learn” how to handle it. But the need for lots of “experience” to handle this situation is key for AI. This type of scenario is one of those subtle situations that “breaks the rules”, and yet is often critical to smooth traffic flow, and that humans generally handle quite well. AI systems still have some way to go before they can approach human ability to interpret such subtle interactions. That said, even humans aren’t always able to handle these situations – what do you do when the oncoming car slows down, but doesn’t make an explicit gesture like flashing their lights? Are you really sure they’re pausing to allow you to cross? So humans don’t always get it right either.
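The “learning from experience” idea above can be sketched very simply. All the data and names here are invented for illustration: past observations of oncoming-driver behaviour are tallied, and a new observation is interpreted only when there is enough experience behind it, otherwise the system stays cautious, like a human who waits for the road to clear.

```python
from collections import Counter

# Toy sketch (invented data): estimating what a driver gesture means
# from labelled past experience, and declining to guess without enough of it.

def learn(observations):
    """observations: list of (gesture, meaning) pairs seen in the past."""
    by_gesture = {}
    for gesture, meaning in observations:
        by_gesture.setdefault(gesture, Counter())[meaning] += 1
    return by_gesture


def interpret(model, gesture, min_examples=3):
    """Predict the most common meaning, but only with enough experience."""
    counts = model.get(gesture)
    if counts is None or sum(counts.values()) < min_examples:
        return "uncertain"  # cautious default: wait for the road to clear
    return counts.most_common(1)[0][0]
```

With three past examples of flashed lights meaning “yield”, the sketch confidently interprets a fourth; with only one ambiguous slow-down on record, it returns “uncertain”, which mirrors the article’s point that even humans hesitate when the gesture is implicit.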

So, what does the future hold? There’s no doubt that the use of AI and associated sensors in vehicles can bring enormous benefits, in terms of managing safety and enabling more efficient use of transportation infrastructure (particularly important in the drive towards sustainability). At the same time, it’s important to recognise the scale of the technical challenge, but also to recognise the seemingly boundless capacity of researchers to innovate. And finally, it’s important for all – technology developers and users alike – to understand the interplay between human and technology on the road, and where each has its strengths and limitations, particularly in those subtle everyday interactions (and don’t forget to wave to the oncoming driver when you’re turning right across the traffic).

 

Profiles

Prof Edward Jones

Edward Jones is Head of the School of Engineering, and is a Personal Professor in Electrical and Electronic Engineering at NUI Galway. He holds BE and PhD degrees in Electronic Engineering, with a specialisation in signal processing. He was previously Vice-Dean of the College of Engineering and Informatics at NUI Galway, and has also held a number of senior R&D positions in Irish and multi-national companies, including Toucan Technology, PMC-Sierra, Innovada, and Duolog Technologies. His current research interests are in the application of signal processing and machine learning technologies in the domains of connected and autonomous vehicles, and bio-signal processing for healthcare. He is a Chartered Engineer and a Fellow of the Institution of Engineers of Ireland.

