Despite all the hype, driverless vehicles have big problems that will not be solved anytime soon. As someone who studies computer science and artificial intelligence, I find it perplexing that legislators have made it legal for these vehicles to be on the road at all. They pose a serious safety risk, and they fail to answer questions few people even ask: simple, straightforward, basic questions that do not necessarily involve the computing technology directly. Below is a sample of these problems; there are others. I hope you will conclude that moving forward with this technology should be done experimentally in a laboratory, not on open roadways and interstates with human motorists' safety at stake.
In the case of an accident, especially a major accident involving injuries, who is responsible? The owner of the vehicle? The corporation that built it? The writer of the software driving it? Or some other entity, perhaps the vehicle itself? Can an autonomous robot be held legally liable for an infraction?
Safeguarding the vehicle's cargo is another job of the human operator. In the case of a self-driving vehicle, how can the cargo best be secured? Would you trust your loved ones, especially children, to be transported by a driverless vehicle?
Human-to-human communication during the driving process is not possible. How often have you rolled down the window to take direction from a police officer, EMT, or other driver, or simply to provide information to a questioning motorist? There is also the hand-waving at stop signs, and other gesturing: visual human-to-human communication at key moments that prevents accidents.
A walk-around inspection of the vehicle is not possible prior to operation. It is sometimes important to inspect tires, fenders, glass, and the operation of lighting and other features before and during vehicle operation, especially on long trips. A driverless vehicle may have no way of knowing that a taillight, brake light, license plate light, or other important feature is malfunctioning during operation.
During a police traffic stop, who is supposed to communicate with law enforcement to rectify a legal situation or infraction? Who is the officer supposed to issue the ticket to? In the case of erratic or dangerous driving (something very possible with today’s driverless software) once again who is responsible and what are the penalties?
How is auto and liability insurance to be calculated when there is no driver's record? Based on the version of the software in question? The hardware currently in use?
Information security: all of these vehicles, like computer systems in general, are subject to hacking, malware, and the attention of malicious actors. In one case, computer security researchers placed black electrical tape over a speed limit sign to convert a 30 mph limit into an 80 mph limit. The driverless vehicle sped right up to 80 mph without considering that it was obviously in a residential zone. Attacks like this can come from outside the vehicle, from inside the vehicle, or remotely over the internet, completely compromising the vehicle.
During routine operation, human drivers are called upon to perform a variety of expected physical tasks. One is to place warning markers around the vehicle after a breakdown or accident. Another is to fix a flat tire. Others include cleaning debris from a dirty license plate, headlight, or taillight, or removing something dragging from the vehicle. None of this is possible with a fully autonomous driverless vehicle.
When they do not know what to do, many driverless vehicles are programmed simply to pull over and stop operating. With many driverless vehicles on the road at once, falling into the same unknown situation simultaneously would result in all of them pulling over and creating their own autonomous traffic jam, one that would be hard to clear, because no one can communicate with the vehicles to give further instruction except an operator who may be thousands of miles away. Unless, of course, you provide a special law enforcement interface, which will in turn become a target for hacking by malicious operators seeking control. This is an unsolvable catch-22 you will not hear much about from proponents with conflicts of interest.
How can the promises of driverless vehicle safety possibly be believed, when no autonomous system has yet been demonstrated to be completely safe, at least to the level of a trained and experienced human operator?
A good example of what will most likely happen with driverless vehicles comes from the commercial airline industry's experience over the last 40 years. Autopilot systems are capable of taking an aircraft off, navigating it to its destination, and landing it about as well as, or better than, any human pilot available. And yet the human pilots, sometimes multiple human pilots, have not left the cockpit. The reasons are similar to those listed above. This is something artificial intelligence researchers call the "last 10% problem": you can automate most of the tasks, but roughly 10% always remain that a human must be available to handle. Performing the pre-flight walk-around of the aircraft, for example. Communicating with the flight crew and the tower. Handling emergency decisions and actions.
Similar tasks in autonomous vehicles will ensure that human operators, especially truck drivers, are not leaving the driver's seat anytime soon. The proper vision for artificial intelligence in vehicles is to empower the human operator, make their job easier, and give them important additional information they would not otherwise have. If nothing else, the truck driver is a cargo security and logistics officer first and foremost, personally responsible for their charge.
The whole goal of replacing or eliminating the driver is misguided and will result in bad business outcomes, decreased safety, and lives lost. Let us hope the legislators in multiple states stop listening to the hype delivered to them by technology stock pumpers and science fiction evangelists, look at the facts, and keep our roads, and jobs, as safe as they should be. A computer game crash is not the same as a semi-tractor trailer crash.
— — —
Dr. Michael J. Pelosi is assistant professor of computer science at Texas A&M University-Texarkana.