Addressing Critical Challenges in Connected Autonomous Vehicles


Critical considerations pertinent to connected autonomous vehicles (CAVs), such as ethics, liability, privacy, and both physical and cybersecurity, do not share the same spotlight as the benefits.

More often, the benefits take the front page, including reductions in fatal accidents, fuel consumption, and traffic congestion. Of course, all of these are attractive. But the challenges are equally worthy of attention.

The increasing use of artificial intelligence (AI) raises questions about a CAV’s ability to make quick and correct decisions in potentially fatal accident situations based on the system’s observation of human physical features. And how will CAVs navigate the myriad traffic regulations that can change dramatically from one locale to the next, and from moment to moment?

In a trio of recent IEEE SA webinars, experts working in diverse technologies discussed several matters surrounding autonomous mobility, topics not often found in the press.


When it comes to ethics, the main focus seems to be on how AI helps a CAV recognize people, objects, and situations. Raja Chatila, Professor Emeritus at Sorbonne University and member of the French National Pilot Committee for Digital Ethics, points to one early example: an AI system trained to recognize humans, but whose training data did not include people with darker skin tones. As a result, the system could not identify people of color, a failure that would prove disastrous in autonomous driving applications.

Probably the most controversial ethics issue is the belief that CAVs should be able to make priority life-saving decisions similar to those posed by the well-known thought experiment in ethics and psychology called the ‘trolley problem’. The driver of a trolley faces an imminent collision on the track ahead and has only two options: do nothing and hit five people on the track, or pull a lever to divert the trolley onto a different track, where it would strike a single person.

In reality, CAVs don’t need to make ethical and moral decisions. Instead, they must assess who and/or what is at greater risk and adjust operations to eliminate or minimize risk, damages, injuries, and death. Ethically speaking, CAVs, via machine learning or AI, must perform accurate risk assessment based on objective features and not on characteristics such as gender, age, race, and other unique human identifiers[1].
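To make this concrete, the kind of objective risk assessment described above can be sketched in a few lines. This is a hypothetical illustration only: the object classes, vulnerability weights, and time-to-collision scoring are assumptions for the sake of the example, not any manufacturer's actual method. The point is that the score depends solely on physics and object class, never on demographic attributes.

```python
from dataclasses import dataclass

# Hypothetical object classes weighted by physical vulnerability,
# not by any human demographic attribute.
VULNERABILITY = {"pedestrian": 1.0, "cyclist": 0.9, "vehicle": 0.4, "debris": 0.1}

@dataclass
class DetectedObject:
    kind: str                # e.g. "pedestrian", "cyclist", "vehicle"
    distance_m: float        # distance from the CAV in meters
    closing_speed_ms: float  # relative speed toward the CAV in m/s

def risk_score(obj: DetectedObject) -> float:
    """Score collision risk from objective kinematics and class vulnerability only."""
    if obj.closing_speed_ms <= 0 or obj.distance_m <= 0:
        return 0.0  # moving apart or invalid reading: no collision risk
    time_to_collision = obj.distance_m / obj.closing_speed_ms
    # Shorter time-to-collision and higher vulnerability mean higher risk.
    return VULNERABILITY.get(obj.kind, 0.5) / time_to_collision

objects = [
    DetectedObject("pedestrian", 20.0, 5.0),  # time to collision: 4 s
    DetectedObject("vehicle", 10.0, 5.0),     # time to collision: 2 s
]
highest_risk = max(objects, key=risk_score)
```

Here the pedestrian outranks the nearer vehicle because its greater vulnerability outweighs the longer time to collision, which is exactly the kind of objective trade-off the text describes.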


If a CAV is involved in or causes a serious accident, who is at fault? The vehicle, the human driver, the manufacturer? Obviously, if the vehicle has a manufacturing defect that’s not addressed with a reasonable recall, then the manufacturer should assume a greater level of responsibility. However, if that’s not the case, the lines of liability become blurred.

There are six levels of automotive autonomy:

  • level 0 (complete driver control, no automation, driver mandatory)
  • level 1 (minor automation, e.g., cruise control, driver intervention required)
  • level 2 (partial automation, ADAS, steering and acceleration control, driver intervention required)
  • level 3 (environmental detection, vehicle can perform most driver tasks, driver intervention required)
  • level 4 (extensive automation, driver intervention is optional)
  • level 5 (full driving capabilities, requires no driver intervention or presence)

The CAV industry has not yet reached the fully autonomous levels 4 and 5. Strictly speaking, levels 1 to 3 describe automated, not autonomous, vehicles, as all three require an onboard driver who can take control of the vehicle at any point.
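The six levels above can be encoded directly, which makes the automated-versus-autonomous distinction explicit in software. The enum names below are illustrative shorthand for the descriptions in the list, not an official naming scheme.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """The six levels of driving automation summarized above."""
    L0_NO_AUTOMATION = 0
    L1_DRIVER_ASSISTANCE = 1       # minor automation, e.g. cruise control
    L2_PARTIAL_AUTOMATION = 2      # ADAS: steering and acceleration control
    L3_CONDITIONAL_AUTOMATION = 3  # environmental detection, most driver tasks
    L4_HIGH_AUTOMATION = 4         # driver intervention optional
    L5_FULL_AUTOMATION = 5         # no driver intervention or presence

def driver_required(level: AutonomyLevel) -> bool:
    # Levels 0 through 3 require an onboard driver ready to intervene;
    # level 4 makes intervention optional, and level 5 needs no driver at all.
    return level <= AutonomyLevel.L3_CONDITIONAL_AUTOMATION
```

A check like `driver_required(AutonomyLevel.L3_CONDITIONAL_AUTOMATION)` returning true is the software restatement of the point in the text: everything below level 4 is automated, not autonomous.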

The question then remains, who or what is liable in the event of an accident? The manufacturer may claim that, as manual control of the vehicle is available, the driver is liable. The driver, however, may claim some malfunction of the manufacturer’s automated system is to blame. Finger pointing is not the solution.


Privacy and cybersecurity issues have become ubiquitous across connected applications, and CAVs pose their own unique concerns. A vehicle need not be autonomous to experience potential privacy invasions. All that’s necessary for intrusion is a GPS tracking system and/or one or more occupants with a smartphone. And as both technologies rely on software, their cybersecurity is often questionable at best.

Obviously, CAVs employ massive amounts of software that require regular updates, which extend the vehicle’s existing functionality while also adding new functions. Most likely, the vehicle performs updates wirelessly via 5G. However, anything employing wireless connectivity is fair game for hackers and cybercriminals. In a worst-case scenario, a hacker could take control of a CAV with passengers onboard. So far, such incidents have not been widespread, but more work and due diligence are necessary to stay ahead of the hackers.
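One basic defense for wireless updates is refusing to install any payload that does not match a trusted digest. The sketch below is a minimal, hypothetical illustration of that integrity step only; real automotive over-the-air systems use asymmetric signatures anchored in hardware security modules, not a bare hash comparison.

```python
import hashlib
import hmac

def verify_update(payload: bytes, expected_sha256_hex: str) -> bool:
    """Reject an over-the-air payload whose digest does not match the
    value published in a trusted manifest. Illustrative sketch only."""
    actual = hashlib.sha256(payload).hexdigest()
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(actual, expected_sha256_hex)

payload = b"firmware-v2.1"                      # hypothetical update payload
good_digest = hashlib.sha256(payload).hexdigest()
intact = verify_update(payload, good_digest)    # an unmodified payload passes
tampered = verify_update(b"tampered", good_digest)  # a modified payload fails
```

The design point is that the vehicle never trusts the wireless channel itself; it trusts only payloads it can verify against material obtained out of band.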

In addition, CAVs collect large amounts of data in the form of images of pedestrians, without the pedestrians’ or the CAV owner’s consent. At this time, there are no regulations governing how much data can be collected, who can access that data, or how it is distributed and stored. Essentially, this data is usable for a plethora of purposes, again compromising privacy. Paired with the ability to transmit these images wirelessly, this issue also leaks into the ethics domain.


Anyone who has driven from one city to the next, even within the same state, knows just by following road signs that speed limits change, lanes merge or widen, detours are common, and any manner of change is possible. Humans adjust to these changes quite easily by merely observing signs or taking direction from police. But how would a CAV address them?

Initially, with all the advancements in camera, ADAS, software, and sensor technologies, the basics should be easy to tackle. Cameras and image sensors can feed visual data to software that instructs the vehicle to adjust speed, stop, change lanes, or perform other basic driving functions. If the driving environment is not too complex, these solutions would be adequate.

In the US, driving from state to state presents great challenges in that rules and laws of the road change dramatically, and many do not appear on road signs. Therefore, CAVs will need to know and understand those rules for each state they drive through.

Driving in other countries presents even greater challenges, such as which side of the road to drive on. Bring your US-based CAV to the United Kingdom without some software or other intervening technology and try to navigate the infamous London traffic circle in the correct lane.

One possible solution is for the various jurisdictions to provide downloadable software libraries containing their laws of the road, thereby allowing CAVs to operate within those laws. Much as motorists once purchased roadmaps in the analog days, CAV owners could purchase and download the traffic laws and regulations matching their motoring itinerary and travel with minimal or no legal interference.
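Such a rule library could be as simple as a per-jurisdiction record the vehicle loads when it crosses a boundary. Everything in this sketch is hypothetical: the jurisdiction codes, field names, and rule values are placeholders to show the shape of the idea, not real regulatory data.

```python
# Hypothetical downloadable "rules of the road" records, keyed by
# jurisdiction. All values are illustrative placeholders.
RULE_LIBRARIES = {
    "US-CA": {"drive_on": "right", "default_urban_limit_kmh": 40},
    "US-NY": {"drive_on": "right", "default_urban_limit_kmh": 40},
    "GB":    {"drive_on": "left",  "default_urban_limit_kmh": 48},
}

def load_rules(jurisdiction: str) -> dict:
    """Return the installed rule set for a jurisdiction, or fail loudly."""
    try:
        return RULE_LIBRARIES[jurisdiction]
    except KeyError:
        # A CAV with no rule library for its location should not guess.
        raise LookupError(f"no rule library installed for {jurisdiction}") from None

# A US-tuned CAV entering the UK must reconfigure its lane-keeping model.
must_switch_sides = load_rules("GB")["drive_on"] != load_rules("US-CA")["drive_on"]
```

Failing loudly when no library is installed mirrors the roadmap analogy in the text: without the map for a region, the trip should not proceed on guesswork.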


Without question, CAVs have a great future in the mainstream, but open issues concerning safety, ethics, cybersecurity, transparency, and compliance challenges still need to be addressed. Adoption of standards such as IEEE 2846™-2022 would be a way to address some of these challenges. Manufacturers who choose to do so would improve their products’ reliability and demonstrate careful consideration in the process.

Learn more about this topic and view the webinars upon which this article was based.


[1] AI and Autonomous Driving: Key ethical considerations by Franziska Poszler and Maximilian Geißlinger
