No, it doesn't. Google's cars, despite their engineering might and the insane number of training miles they've racked up, are very inclined to stop when in doubt (for example because someone skateboarding the other way on the other side of the road trips their "idiot detector", or because the car sees a cyclist doing a track-stand and assumes they're moving because their feet are off the ground). The Google cars are safe, polite drivers, but in order to be safe they have to err on the side of caution and stop when in doubt, and that would create unacceptable delays on a railway. As well as having the luxury of stopping when in doubt, they also don't have any of the secondary responsibilities outlined by lineclear in post #91.

And for those thinking we're on the cusp of technology that comprehensively addresses object detection: there is no technology in existence that allows a computer to apply logical reasoning to an unforeseen situation. Neural networks can recognise the familiar very well, although doing so requires colossal amounts of training data, but they can't really do anything with the unfamiliar.

A way of introducing automation without causing delays from excessive caution would be to have a fully attentive human driver backing up the AI. At that point there isn't a huge amount the AI could be expected to add, beyond perhaps applying perfect power and braking when the conditions are perfectly known.

Someone mentioned AIs that work with doctors to help with diagnosis, and it's true that they sometimes flag things that would otherwise have been missed and are therefore helpful, but they also miss plenty of things a human would spot and raise plenty of false positives. Again, the key to their effective use is that they work with a human, not instead of one.