The most crucial insight for self-driving cars is that this is not the trolley problem: "I give up, stop where I am" is a valid answer.
We have actually built automation where "just give up" isn't a valid answer. CAT IIIc autoland on a jetliner has "fail active" scenarios: the machine concludes, just before touchdown, that it no longer has confidence in its position because of one or more sudden sensor failures, but the human pilots can't possibly intercede quickly enough to be safe, and under CAT IIIc conditions they can't see anything anyway. So although the aeroplane will tell the pilots that a failure occurred, it will nevertheless attempt to continue the now-unsafe landing in this edge case. Most likely, despite the reduced sensor validity, the landing succeeds and everybody lives; if not, it's not as though the humans could have reacted in time anyway. Self driving isn't like that. A plane that simply stops flying kills everybody aboard; a car can just stop, and it's merely annoying.
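The contrast can be sketched as a degraded-mode fallback policy: when sensor confidence drops, a car can always degrade to a minimal-risk stop, while an aircraft just before touchdown has neither that option nor time for a human hand-off. A minimal sketch, with names and thresholds purely illustrative (not taken from any real autopilot or driving stack):

```python
from enum import Enum, auto

class Fallback(Enum):
    CONTINUE = auto()            # proceed despite degraded sensors
    HAND_OFF = auto()            # alert the human and return control
    MINIMAL_RISK_STOP = auto()   # brake and pull over: "give up, stop where I am"

def choose_fallback(confidence: float, can_stop_safely: bool,
                    human_can_intervene: bool, threshold: float = 0.95) -> Fallback:
    """Pick a degraded-mode action. Hypothetical policy for illustration only."""
    if confidence >= threshold:
        return Fallback.CONTINUE
    if can_stop_safely:          # a car: stopping is merely annoying
        return Fallback.MINIMAL_RISK_STOP
    if human_can_intervene:      # e.g. a failure early enough in the approach
        return Fallback.HAND_OFF
    # Fail-active autoland just before touchdown: no safe stop, no time for
    # the pilots to react, so continue the now-unsafe landing anyway.
    return Fallback.CONTINUE

# A car losing sensor confidence can just stop:
car_action = choose_fallback(0.6, can_stop_safely=True, human_can_intervene=True)
# A jet on CAT IIIc autoland just before touchdown has neither escape hatch:
jet_action = choose_fallback(0.6, can_stop_safely=False, human_can_intervene=False)
```

The asymmetry is the whole point: the branch a car always has available (`MINIMAL_RISK_STOP`) is exactly the one the aircraft lacks, which is why "just give up" is a legitimate design answer for cars but not for autoland.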
The assumption is that the highly networked car has some kind of security vulnerability that allows malicious users to take control and perform acts of terror.
Since this is in software, the attacker can theoretically scale the attack to involve all cars on the road with the same vulnerabilities, which could be millions of vehicles.
This problem is not unique to self-driving cars; it applies to any highly networked car with software control of key systems, like a modern Tesla. However, a full self-driving car may not have manual overrides that allow the passenger to stop the vehicle.