We’ve already had fatalities thanks to Tesla “Autopilot”.

Do AI driving systems go more miles without accidents or fatalities than human drivers?

Uber had a high-profile case where the AI and the human safety driver failed together. Humans are still fully responsible for their vehicles.

Based on my own near misses with driver-assist technology, I’d actually be interested to know the answer; I’ve had some pretty scary incidents when things like lane markings are faded.

I also saw a video the other day of a Tesla swerving around a jaywalking pedestrian at relatively high speed. Also scary.

It is interesting to compare the various systems’ safety pages:

- https://waymo.com/safety/

- https://www.tesla.com/safety

- https://www.ford.com/technology/driver-assist-technology/

- https://www.gmc.com/connectivity-technology/super-cruise

- https://www.chevrolet.com/support/vehicle/driving-safety/dri...

- https://github.com/ApolloAuto/apollo (Baidu-backed)

- https://github.com/commaai/openpilot/blob/master/docs/SAFETY... | https://blog.comma.ai/understanding-the-openpilot-safety-mod....

The last two’s minimalism around safety is not reassuring.