The lack of technical understanding of AI risks in both the automotive industry and government is concerning.
Both language models and self-driving cars rely on statistical reasoning to make decisions, but while a language model's mistakes produce nonsense text, a self-driving car's mistakes can be deadly.
In autonomous vehicles, human errors in coding have replaced human errors in operation, and faulty software has already caused crashes.
AI failure modes are difficult to predict, leading to unexpected behaviors such as phantom braking in self-driving cars.
Source: https://spectrum.ieee.org/self-driving-cars-2662494269