It wasn’t nearly as cautious or timid as I expected it to be. But it also wasn’t reckless. I would describe it as assertive, but I’m sure some people would call it aggressive.
It knows the rules and what’s going on around it, and acts accordingly. This really should be the norm.
For the first time in a decade I’m excited about an emerging technology. The future is bright.
I would rate their driving as quite good, but their signaling of intent was terrible. More than once I saw them use turn signals only to never actually turn: signals on, then off, then on again, and eventually perhaps a turn, but not always.
Humans are terrible at this too, but we also use more subtle and inadvertent signals, like the slight drift toward the line telling me a driver is likely about to switch lanes, only to see the signal go on halfway into it. Those subtle signals are easier to pick up on than a robot driving more or less perfectly straight while signaling, turning signals off, turning them back on again, and so on. It's also super creepy with an empty driver's seat and no one responsible to make eye contact with.
But the actual driving wasn't bad, as it turns out. Never had or saw a near miss, anything reckless or otherwise remarkable. I can see myself getting used to this tech and I would love for it to succeed, because man driving for anything but fun really sucks.
>Humans are terrible at this too, but we also use more subtle and inadvertent signals like the slight movement to the line telling me they're likely about to switch lanes, only to see the signal go on halfway into it.
Isn't waymo's behavior preferred? I'd rather see turn signals well in advance of any possible lane changes, rather than the moment before the cars actually start changing lanes. Sure, it'd be ideal if the turn signals were well in advance and there were no false positives, but road conditions are changing constantly, so by necessity you're going to be trading off between advance notice and false positives.
For what it's worth, this was explicitly taught in Northern European driving classes, especially in the context of turns. It would literally translate as "grouping" or "making a formation". You "group right" to make a right turn (think aligning text left/center/right). You can look at a line of cars and immediately know which ones are preparing to turn, even from an angle where turn signals aren't noticeable.
The Waymo driving style is almost passive-aggressive. It has superhuman situational awareness and knowledge. I wonder if it's actually using its blinkers correctly and it's everyone else that's technically wrong.
It’s also not perfect, so who knows. There are probably still bugs to work out.
I can understand that inconsistent use of turn signals is annoying, though it isn't as bad as people who don't use them at all. But my understanding is that the American road system doesn't have roundabouts like we do, which kind of require good indicator use for smooth driving.
As an example of an urban legend/facile pop science take being treated as fact [1]?
> if the richest country in the world had the common sense of simply getting rid of its car dependency and developed proper public transit infrastructure
Is there a single developed economy that doesn't make significant use of trucks and cars?
[1] https://en.wikipedia.org/wiki/Space_Pen#Uses_in_the_U.S._and...
We can look at international cities with longing and jealousy, but LA, Houston, Miami, and so on will never have useable public transit. You’d have to bulldoze the whole city and start again. So great for Paris and London and yes I would rather live there, but we are stuck finding solutions that work inside the mess we made.
Last mile is a nightmare with all public transit. With the low population density of the US we need a lot of small vehicles to feed transit hubs.
- There is nothing simple about it.
- There is nothing inevitable to this appearance of "not enough money". For example very little of self-driving cars development has been on public funds. For example, there is plenty of money in California govt. And plenty of money in US federal govt.
- One doesn't prevent the other. You can't seriously argue that better transit would remove all needs for automated driving.
Way more money is being wasted on the grifts surrounding developing public transit infrastructure. LA County for example is 88 different municipalities and it will never get resolved.
Then have the dignitaries and environmental policymakers give their uparmored gas guzzling cars up first, leading by example.
No amount of public transport can accommodate the personal whims and demands of anyone, let alone everyone. Even Japan, famous for its public transport infrastructure, still has a healthy population of drivers in both metro and rural areas.
NYC and London have had 'proper' public transit for over a century. It's handy but not that great. I was on a 52 year old Bakerloo line tube train the other day and they are much like modern tube trains, if grubbier. It's not going to suddenly turn wonderful and solve everything. (typed on a London bus)
Also while I'm not sure any technology is really needed, as in we could get by without it, there are about 1 million road deaths a year globally. A 90% reduction would save 900,000 lives which is a nice to have. And more meaningful than a space pen.
It's a great technical feat, but it's not ever going to be efficient as public transportation. The form factor of the car works against it.
I tried using the train for my commute. However, I had to maintain a car at each end and I could only use the 7AM train as the parking at my origin would be filled at any later time.
Self-driving cars break my dependence on a car at destination and allow me to pick an origin train independent of parking.
The biggest issue that automatic driving needs to overcome is that it's sharing the road with manually driven cars. We've already had the technology for a long time to have perfect automatic driving if the environment was fully automatic; computers are unparalleled at accurately sharing and processing data with each other.
This isn't to say that the solution is to get rid of human drivers, because driving a car has been one of the most empowering paradigm shifts for the commons. Being able to travel yourself timely anywhere anytime for any reason is a level of power that pre-automobile commons simply did not have. Subjugating your power to travel to a computer is surrendering that incredible power.
If (and likely when) we can figure out how to better share human data with computers and vice versa computer data with humans, everyone on the road will be better off.
Ask the 2.3 million employed drivers in the United States how the future looks.
Because pretty soon, they'll all need to be doing something else. And I don't see a technology that will open up 2.3 million jobs for them to move into, or training initiatives to support them while they shift professions.
The future _could_ be bright, if we approach AI right, and build the social safety net to soften the massive amount of transition that will be necessary. But we're not. We're looking at cuts everywhere in the near future. And long term, it seems we have no strategy. Which makes it more likely that a lot of people will be harmed.
It's a fallacy we constantly invoke as an argument when we talk about technology taking people's jobs.
Low specialty professions have been in my view a great way to have people employed, people who may be less fortunate, unspecialized or even students. Now that such professions start getting absorbed by tech, I don't know how those people are supposed to get by.
I can see how somebody would argue that this has occurred before, but for example replacing horses with cars just made people jump to learning how to drive. They didn't have to "specialize for 4 years" and become computer scientists. Now this is a bigger issue imo. But my perspective can be argued.
Cars wiped out farriers. Should we go back to riding horses?
Gig work is a symptom. Not a triumph worthy of protection.
We have been there. The internet and then Google, Facebook were the same. Big Tech can't be trusted.
Trust, trade and mutually-beneficial coëxistence need not go hand in hand.
Waymo would say "don't worry, it sees you" and maybe the numbers even show they do some percentage more than the average human driver, but that still does not reproduce the self-help aspect of you looking at a driver until you see that that driver sees you. I think that is a pretty significant thing. It needs some sort of better answer than "don't worry about it"
Maybe they will prove to be so reliable that you don't need to worry about them. You can just reliably assume and predict their behavior as animate objects about the same as you do for inanimate objects. I.e., you can't make eye contact with a jersey barrier, but you don't need to: you know what it will do (nothing). And you can't make eye contact with, say, a motorized railroad crossing barrier, but you don't need to. It moves, but only in an absolutely predictable way. You can be a cyclist around the railroad crossing barrier no problem.
Maybe Waymo and others will be so consistent that you can safely treat them like "moving inanimate objects".
I don't know if they are right now. They may be better than humans on average, but that is a long way from predictable and safe.
Maybe they can build in some way for the car to signal back to the surrounding people what it has taken note of, or not. Some way for a cyclist or pedestrian to look at a Waymo and see that it sees you. That should be possible.
Maybe it can go both ways: maybe we can develop some kind of standard where, if you need to get a car's attention, there is some gesture you can do that it will be especially watching for. So even if it doesn't see you, it will see you if you wave your hand a certain way or something. The only problem with that is you have to be able to do the thing, and some of the times you need a car to see you the most will be the same times when you are incapacitated, i.e. lying on the road after an accident.
If you took the world as it is now and left everything the same except that every car was a waymo and there were no human-driven cars left, I would be ecstatic. Humans are horrible drivers and some of them are downright murderous. If every car were a waymo I could even imagine letting my kids bike to school in the bay (where I used to live), which I wouldn't dream of now.
But I think the second-order effects of self driving cars could be terrible. It removes any incentive not to have an incredibly long commute (exacerbating sprawl), and so far every time there has been a situation where the needs of walkers and cyclists were pitted against the needs of drivers, drivers won. I think the same will happen with self-driving cars, and people will be made to wear beacons just to walk across the street.
NotJustBikes discusses this well here https://www.youtube.com/watch?v=040ejWnFkj0
But if we have proper regulation (pretty unlikely I would say) and use things like waymo to stop humans from driving (remember, drivers are the leading killers of children in the US!) that would be great.
The roads only have so much capacity which is not going to change much whether the cars are self driven or not.
In London they have deliberately reduced road capacity to reduce traffic by blocking off lanes, side streets and the like. I could see a future where the normal way to get around is a self driving cab to the station at each end with a train for the main journey. I know you can do that now but the human cabs driven are kind of expensive.
Maybe something a bit like Zermatt where there is a train station and then it's pedestrianized apart from electric golf cart like taxis. It all works quite well really apart from being expensive. But property being like £1m+ is kind of a symptom of people wanting to be there.
If I do want to yield despite having priority I can do that just as safely without eye contact and if I want to assert priority it's objectively less dangerous (because of that minority) if I do it without eye contact. Nothing to gain, so much to lose. The reality on the ground is what the car does, and I will focus on that. Taking guesses from looking at the driver is counterproductive.
I also ride a recumbent trike which is low to the ground. A diamond frame puts you up at eye level with most drivers and that makes everything easier.
So you don't make eye contact. That's nice. But the issue isn't actually about eye contact. It's about the human having less control over their own well being.
Whether you personally ever look at a driver, it doesn't change the fact that it's a thing that a person can normally do to determine if it is a good idea to proceed or not. It is never wrong to be aware of your surroundings and look ahead into whatever you are about to do next.
If you look at a driver and that driver takes that as some sort of challenge like a monkey, well A: I've never seen that but whatever, anything is possible. B: It doesn't change anything. You still attained the goal of determining what to do. You now know to stay clear of that particular car.
With no driver, and no other form of feedback driven by you, the humans outside of the driverless car are more powerless than they already were. They are reduced to trusting and hoping.
I do believe that computers are much more likely to have attention span and sensor capacity to reliably spot pedestrians and other random wheeled traffic. The problem is "reliably", which needs to be near 100% for the pedestrian to decide to ignore that it's still in full motion. So it will be interesting to see if such notification helps or not. If we were all wearing AR glasses, a specific car could signal a specific person that it noticed and is tracking them - that THEY are safe from IT. But we are a few years from that.
I'm always surprised how few pedestrians know to look at me, the driver, and not my car. Some don't even look at the car. I developed a habit of looking for their gaze, and if they don't look back, I assume they're not fully aware and am just more cautious.
This works because I, as a human, know this and can compensate when they just rush the crosswalk without being fully aware of their surroundings.
How do you do that with a machine?
So I will stare at where I know the driver's face ought to be, but I can't actually tell whether they have seen me. Tinted windows darken the inside of the car and that makes windshield glare all I can actually see.
Looking at your car is all I need anyway - I can tell if you've seen me by your behavior, you're either slowing down to yield to me or you're not. If you're not, the only possible outcome of knowing you're seeing me is being misled into stepping into your path of travel.
Humans tended to use hand gestures for that around here. Or flashing front or turn lights to disambiguate it. And the person crossing would often nod or just start moving.
AI could be even more explicit about it. It can check if you're noticing the signal too, much like the driver does.
People are conditioned to respond to crossing signals, why not give the self driving car one of those.
Cyclist here.
Are you worried Waymos are going to blow through stop signs (or red lights) AND run you over?
Or are there other situations where you're making eye contact with drivers?
It happens that drivers only check for any large moving object (because they expect cars) and don’t notice a cyclist.
Basically any time a car has to wait for me to pass, and I don’t have much distance to react if it doesn’t, I make sure the driver has seen me.
I guess it depends how they did it exactly. If it looked like an animal or cartoon character etc, that's what I would hate.
If it wasn't pretending to be an aware and caring being, that would be ok.
I wonder if the safety gains could be just due to people being creeped out around them and behaving more cautiously. Perhaps results would be different if they carried mannequins simulating a real driver at the wheel.
This seems to be what is happening. There was a comment here by some user not too long ago saying they deliberately walk in front of Waymos because they trust them to slow down.
Consider: if a non-self-driving car is in an accident with a self-driving car, it'll almost always be the non-self-driving car at fault. And with the telemetry from the self-driving car, they can prove it too, so accidents that would have been no-fault or shared-fault become fully the non-self-driving car's fault. And so I think insurance for non-self-driving cars gets expensive fast as there are more and more self-driving cars on the road.
Insurance isn't a zero sum game, where "less insurance spent on autonomous vehicles" means the insurance companies have to make up for it somewhere else.
In a way, "car driving getting safer overall" isn't great for car insurers, because they make money financing auto risk, and if there is 50% less auto risk then they have less addressable market.
Ideally, insurance cost for self-driving cars would just be lower than insurance costs now (proportional to risk), and even insurance costs for manual drivers might go down because their risk decreases as well.
There will also be things like not having DWIs and even cheap parking (since the car can drive away and park) that'll net out for self driving. And feedback loops there- the same size police force only pulling over manual cars from a smaller and smaller pool.
If automatic driving lowers the floor of risk below that of even the lowest-risk human driver, insurance premiums will adjust to accommodate the fact that human drivers are now comparatively higher risk than computer drivers.
And here's the kicker: Insurance premiums don't even have to increase for the human drivers. Once owners of automatic driving cars pay even cheaper premiums, that becomes the new baseline for "cheap" car insurance and the rest should be plainly obvious. "Want lower rates? Get an automatic driving car."
If you watch those dashcam communities, you see similar behavior in what feels like half of the accidents. Someone else is in the wrong, but boy oh boy did the camera driver do nearly nothing to prevent the accident.
And I'm sure the autonomous vehicle will have a go-very-slowly mode for navigating people's driveways and similar places the mapping cars haven't been yet.
And the nice thing about driver less cars is that they can drive wherever and whenever pretty cheaply. There's no driver to pay. Just the electricity bill for charging the vehicle and some servicing/vehicle depreciation and other fixed cost. That's a race to the bottom in terms of cost.
There's no good economical reason to limit this to just small areas. You might charge passengers a bit extra if they are further out or even for the distance the car has to drive to pick them up. But there's no good reason for that to be very expensive as it would be with a paid driver.
Why wouldn't rates go up for drivers driving outside mapped and well-driven territory?
If you rigorously enforced those limits on regular drivers and additionally imposed an interlock device to prevent DUI and remote insurance company monitoring of driving habits-- that's the comparison population I want to see. Otherwise, this is a press release disguised as an academic paper.
If you are worried about other drivers hitting you, the "all driver" comparison seems relevant. If you want to know if Waymo is a safer driver than you are, that will depend on where you are on the spectrum.
I do want to know waymo vs all population. But yes it should be compared in the same areas during the same period.
I don't see where the paper says this. Also, it compared against not only all drivers but also a subset of "latest generation HDVs."
And remember that currently, self-driving cars in California are exempt from traffic tickets : https://www.theguardian.com/technology/2024/jan/04/self-driv...
We learned that the hard way with Boeing, you cannot trust a company to self-regulate their safety standards. But with Musk in power, the legislation might encourage them to do so.
Really, computers seem more naturally suited to driving in SF than humans - from the point of view of paying attention to enough things, not having to pay attention to map navigation and having more modes of sensory input than humans.
This ain't some web app deployment where flaws are annoying but that's about it.
I've just finished a two-day, 1500 km drive back home across Europe in various storms, heavy rain, a bit of snow, a lot of it in the dark, tons of properly dangerous idiots, weird non-standard marked road repairs (we're talking about Germany), some serious accidents along the way, and a few near misses. Our F11 BMW 5 Series took it on effortlessly, complementing my and my wife's skills but not interfering. I'm not a stellar driver but definitely above average. I've done this drive maybe 30x over the past 15 years; without kids, normally in one 16-18 hour push. The number of clueless idiots is definitely rising and the roads are definitely more full.
No way in hell I'll fully trust something with the most precious stuff in my life until it's properly battle-proven by billions of miles in harsh, complex situations, which what you describe isn't. Till then there is no self-driving, just steps towards it. I don't need this that much for driving around short distances. Good luck with the beta testing to ya all.
>Waymo One, is available 24 hours a day, 7 days a week in the following cities:
>Los Angeles Waymo One is available across nearly 80 square miles of LA County, including Santa Monica, Beverly Hills, and Downtown LA.
>San Francisco Waymo One is available across 55 square miles of the Bay Area...
24/7 across those areas sounds different to what you mention.
a Swiss Reinsurance Company, Ltd, Switzerland
b Zardini Lab (research affiliate), Massachusetts Institute of Technology, USA
c Autonomous Systems Laboratory (research affiliate), Stanford University, USA
d Waymo LLC, USA
e Casualty Actuarial Society, USA
I'd trust this more if Waymo had a direct pipeline to an anonymized public repository of their complete data set for independent analysis
Anecdotally, I see Waymos around town, and they are very clearly driving in a slower, safer, more cautious manner. A 92% reduction seems very plausible.
"Pedestrian involved in accident"
"Cyclist collides with car"
"Car brushes against cyclist"
"Vehicle loses control"
Amazing how responsibility shifts when a two-ton machine is involved.
A few things that pop to mind: did we exclude drivers who were not eligible to drive (under the influence, no valid driving permit, cars with issues [broken brakes]...)? This mostly matters for evaluating the average performance compared to an average non-impaired human — a drunk person might mess with self-driving vehicle sensors as well, just for kicks, once the vehicles are present in suitable numbers and widely available.
Did Waymo drive across all the same areas and time periods as the humans did with an equal "proportion"?
Could we compare "time behind wheel" instead of miles travelled? (Eg. lots of small crashes happen when traffic is at a standstill)
There are so many ways to slice and dice this and come up with favourable results for self-driving systems that I can't but be dubious of any claim.
But notably, if we can liberate ourselves from driving through congested traffic, it would certainly reduce the incentive to speed and aim for more efficient driving as we'd be free to do other stuff while we get to our destination — and simply slowing down would buy both human and software drivers time to react and avoid most crashes (nothing really saves you from someone recklessly driving straight into your car).
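The exposure-metric question above (miles vs time behind the wheel) matters more than it looks. A toy sketch, with all numbers invented purely for illustration, of how the same crash counts rank differently depending on the denominator:

```python
# Invented numbers: same crash count, very different exposure profiles.
city = {"crashes": 2, "miles": 10_000, "hours": 500}     # slow, stop-and-go
highway = {"crashes": 2, "miles": 50_000, "hours": 700}  # fast, free-flowing

for name, d in {"city": city, "highway": highway}.items():
    per_mile = d["crashes"] / d["miles"]
    per_hour = d["crashes"] / d["hours"]
    print(f"{name}: {per_mile:.5f} crashes/mile, {per_hour:.5f} crashes/hour")

# Per mile, city driving looks 5x riskier than highway driving;
# per hour, the gap is only 1.4x. The choice of denominator changes the story.
```

So a fleet that drives mostly slow city miles can look better or worse than the human baseline depending purely on which exposure metric the study normalizes by.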
One could argue here that it would be unfair to exclude the worst performing humans. Plenty of humans cause accidents for easily preventable reasons, but that's an honest part of our behavior. Or to rephrase, I think excluding the categories of people who drive terribly, but do still drive nonetheless, would be less representative of reality.
> Did Waymo drive across all the same areas and time periods as the humans did with an equal "proportion"?
No, they didn't. Their methodology didn't adjust for things like highway vs city street miles, although it did adjust for city and state.
> Could we compare "time behind wheel" instead of miles travelled?
We could, but that would incentivize slow driving rather than safe driving. You need to travel the same minimum distance from a hypothetical "Point A" to "Point B" no matter what, but the number of minutes could be inflated easily. Economic factors discourage inflating the number of miles.
> There are so many ways to slice and dice this and come up with favourable results for self-driving systems that I can't but be dubious of any claim.
I think it's fair to criticize the fact that they didn't adjust for geographic area with as much granularity as they should have (i.e. for highways vs city streets, or for excluded zones like airports). But other than that, their methodology seems solid to me.
But most notably, aggregated statistics like these compare averages, when better statistical representations exist (median, p75, etc.). E.g. it's possible for one driver to cause 20 collisions: in a set of 10 drivers with no other collisions, we'd be at an average of 2 collisions per driver, even though the typical driver had 0.
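The outlier effect described above is just mean vs median; a quick sketch with the same made-up numbers:

```python
from statistics import mean, median

# One driver causes 20 collisions; the other nine cause none.
collisions = [20] + [0] * 9

print(mean(collisions))    # prints 2 — the "average driver" has 2 collisions...
print(median(collisions))  # prints 0.0 — ...while the typical driver has 0.
```

Comparing a fleet only against the mean lets a handful of terrible human drivers make the whole human population look worse than the driver you'd typically share the road with.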
It's also not only about highway/city street miles, it's also about time of day: if Waymo is proportionally less on the streets and areas where there is more risky driving (eg. around bars and around midnight to 3am when there are also drunk pedestrians around) compared to human drivers, that would obviously skew the numbers in SDV's favour. Again, potentially they did since they acknowledge a separate study focusing on those, but later on they only talk about aggregated insurance claims.
They also acknowledge they are not accounting for accident severity, not even by using the dollar amount: while they had no fatalities, it's obviously important to weigh accidents by their severity.
With all this said, this does demonstrate current insurance liability of an average human driver compared to a Waymo SDV in Phoenix and SFO, but only once we have more comparable data should we make a bigger claim.
Doesn't Waymo only drive in the city? And damage/injury is much greater on highways? If so that pretty much makes the study worthless.
If our current system of checks and incentives is not able to keep them off the road, then you need to take them and the accidents they cause into account when comparing driving safety with an autonomous system.
Also note that the margins are too big for this to be the whole explanation: only about a third of accidents involve intoxicated drivers, while the accident reduction rates from the paper are much higher than that.
I don't want to come across as harsh or accusing, but the "Methodology" section contains a lot of detail (and a more nuanced discussion could be had without disregarding it entirely :P).
So the fairer comparison would be to say that if existing SDVs were instead being driven by human drivers, we would have seen ~60 property claims more and ~40 bodily injury claims more (forgot the exact numbers from the paper), which is what, like 0.1% improvement in claims overall (certainly still a great result for the small comparative number of miles travelled, but doesn't jump off the page really).
But because the miles traveled are so far apart, nothing is really a "fair" comparison.
I don't think we can discount people who aren't fit/eligible to drive _if they still drove_. They might not have a _legal_ right to be on the road, but they are still on the road.
Do you want them to beat a drunk driver, or an average, non-impaired human? If you got into a cab and cab driver was obviously drunk, would you ride along or get out asap?
By "letting" them compare to an "average" (skewed by impaired drivers' incidents), we are letting them appear much better than an average, non-impaired driver, whereas they might be, or might not be (numbers clearly suggest they'd still beat them, but not by this much).
now THAT would be massaging statistics! Just take the best drivers, that had enough sleep, no drink, perfect car conditions... No. You have to compare with the whole world, "as it is"
This is simple with a small, experimental fleet — so we are looking at the best case scenario for these vehicles, but the question is what does it fall down to in a realistic commercial application?
That would also be comparing safety, because averages are always skewed by bad apples (e.g. one driver with 20 collisions brings the average to 2 for a group of 10 drivers with no other collisions). We at least need to start talking about medians, standard deviations, and such.
And we need autonomous vehicles to beat or match good drivers, otherwise, good drivers are worse off in the streets (and due to how averages are used, this might be more than 50% of drivers). Not sure why that's so controversial?
-A man in his 20s who doesn't need a car and only uses public transport
I agree that the US needs better public transportation, but most of the US is not a city.
If the nearest grocery store is 30 miles away and the nearest bus stop is several miles away from your home, then using public transportation turns this into a half-day affair.
As it should. You’re objectively higher risk, particularly if you’ve ever been at fault for a crash or gotten a ticket.
I would expect premium levels to continue to be driven by liability pool payouts plus a profit margin. Autonomous vehicles don't increase driver risk or liability pool payouts. IF anything, they reduce them due to safer driving conditions for the humans.
How objective is that?
Which is to say that naked capitalism works until it doesn’t, and when it stops working big chances are that it won’t do it gracefully, see how present-day health insurance CEOs are now scared shitless of Big City streets.
we could have amazing and SAFE cycling infrastructure NOW if american culture wasn't so objectively trash.
What year would the EU* ban human driven cars on public roads?
That's it. Million different opinions ranging from "never" to "within a couple decades for sure", weak, strong opinions, and of course many interesting topics to branch out into.
*: sometimes I ask about the US, or particular countries; it doesn't really matter for the sake of the conversation.
Fun fact: people working in automotive tend to say within this century, people who like and owned many cars say never :-)
If at some point all new cars have full self-driving capabilities with widely accepted performance, then it's not impossible that after a decade of that as status quo, cars end up shipping with manual driving only being an override for specific maneuvering, and eventually having regular licenses lose rights to manual driving on public roads.
Just consider vintage cars: These are vastly more dangerous for both occupants and pedestrians/cyclists than existing cars and also pollute more. They have not been banned in the last century and no ban seems to be coming anytime soon.
If you are not gonna ban vintage cars, why would you ever ban manual driving (which is much less harmful in multiple ways)?
Once adoption picks up, a ban becomes useless anyway because the fraction of problematic cars decreases (and no longer really matters), while any decisive legislative action towards bans gets tons of pushback from enthusiasts (and no real political gain, because the average voter does not give a shit about single digit traffic accident rate reduction anyway).
It's just saying something without any information.
Here in Italy insurance is 20% more expensive at the very least and the gap is widening if you don't put a tracking device on your car which checks position, speed, etc.
I have a hard time justifying why one would not want it (besides privacy); everyone who complains about it is someone who regularly drives above the speed limit or pulls dangerous overtaking maneuvers.
(all of which regularly happen in the same place)
No, not more than you currently pay, because you haven't become more at risk of collision than you are now.
I guess this day has arrived for us.
Definitely sooner than I expected.
But my question was about being "one of the major players".
The market is huge. My rough guess is that every 10% of market share is worth $1T in market cap. Which is half of Google's overall current market cap.
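A quick back-of-envelope check of that guess. Both figures below are the comment's rough assumptions ($1T per 10% of market share, and a Google market cap of roughly twice that), not data:

```python
# Rough figures from the comment above; both are assumptions, not measurements.
value_per_10pct_share = 1.0e12   # guess: each 10% of market share ~ $1T of market cap
google_market_cap = 2.0e12       # implied: $1T is "half of Google's current market cap"

# If every 10% slice is worth $1T, the whole market would be worth $10T.
implied_total_market_value = value_per_10pct_share * (100 / 10)

# Expressed as a multiple of Google's market cap.
multiple_of_google = implied_total_market_value / google_market_cap
print(multiple_of_google)  # -> 5.0
```

So under those assumptions, the whole robotaxi market would be worth about five Googles, which is why the guess reads as optimistic.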
https://www.reuters.com/business/autos-transportation/trump-...
Last I heard from Waymo, they have to do every city, one at a time, and there are a lot of cities.
It just doesn't seem like a huge problem? If you have cars equipped with high resolution Lidar and cameras, mapping new routes seems like it ought to scale as the fleet is expanded.
This problem seems embarrassingly parallel.
Google hired a bunch of people who'd done well in the 2005 DARPA Grand Challenge [1], including Sebastian Thrun and Chris Urmson, who led the winning team.
Thrun is also behind Google Street View which in some regards [2] looks a lot like a self-driving-car sensor suite. So Google was having LIDAR-equipped, high-precision-GPS-equipped cars drive every street in every prosperous country, starting back in 2007. Uber wasn't even founded until 2009.
Other Google hires had a similar background - such as Anthony Levandowski who competed in the DARPA Grand Challenge with an autonomous motorcycle. He later gained fame after being caught stealing a bunch of LIDAR schematics and similar trade secrets while leaving Google for Uber.
We also know from court documents that Google was throwing around mountains of cash, even when the self-driving-car division had no revenue. Waymo was set up as an "internal startup" giving employees "equity" so Levandowski left not just with internal documents but also with over $100 million.
That's a stark contrast to a lot of other players who'd need to show investors a lot more to get a lot less. This endless money was undoubtedly helpful in giving them the confidence to design for L5 autonomy from the start, no need to design a lesser system to get the money coming in early. And of course if you can pay $100 million for one guy, you're not going to baulk at the cost of a few $10k LIDARs so long as the people making them claim the price will fall to $200 for automotive quantities.
The 2005 Grand Challenge simplified the driving problem a great deal - no pedestrians or moving vehicles to deal with, safe and driveable route guaranteed to exist - but it did a lot to focus development efforts.
[1] https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2005) [2] https://schlaff.com/wp/the-secret-to-googles-self-driving-ca...
L4. Waymo can't "drive everywhere in all conditions" [1]. (Nobody can. Not to L4 standards of never requiring "you to take over driving.")
Waymo existed for like a decade on the basis of "we have a pile of money and our founders think it's cool". They moved slow and broke nothing. They made slow, incremental progress exploring the self driving space for years with plenty of funding and no expectation of a product launch.
So when a bunch of other companies came along, years and years later, and were all "we're going to have self driving cars next year!!!", and Waymo had their "oh shit, we better actually make this into a product or why do we even exist " moment, they were already in a really good place. They weren't rushed, they probably could have pivoted to a product a year or three earlier. They'd already solved the problems everyone else was just handwaving.
If it was accidental, they sure got lucky in the amount of time they were given with relatively little pressure, and in the timing of that competitive pressure at the end.
Waymo (January 2009) is older than Uber (March 2009) and has always been part of Google/Alphabet. So to answer your question they did it with enormous amounts of time, money, and talent.
* Time - Waymo started work on this in 2009 and is now seeing faster adoption in 2024. Uber started in 2015 and sold its effort off by 2020. Time has given Waymo the advantage of slowly ramping up and starting with toe-dips.
* Funding - Waymo has spent more money to get where they are now (probably 2-3x what Uber spent in total). Google also has deeper pockets than Uber, which means that there is less pressure on quickly ramping up Go To Market or immediately getting profitability.
* Culture - Waymo was much more cautious (likely because of funding structure) which is now paying dividends in terms of regulatory approval and consumer trust ('sometimes you have to go slow to go fast').
With Cruise gone, there's now basically just Waymo, Zoox, and Tesla (a bit controversial) as the names thrown around when talking about self-driving market share, and of those only Waymo has a functional service.
We know that humans under the influence are really bad drivers. So bad that DUI is illegal. Stats commonly include accidents caused by these drivers.
I’m joking of course, but it was the first thing I thought when I read the title.
only the claims associated with vehicles registered to addresses (i.e., where the insured resides) within Waymo's operating zip codes in San Francisco, the Phoenix metropolitan region, Los Angeles, and Austin were included
I get that some administrative oversight is always going to be needed, but to what extent is the AI "Actually Indians" at present?
I have never been responsible for an accident and have never been involved in a serious one with bodily injuries. Maybe I'm already avoiding >95% of accidents by not driving drunk, not texting, paying attention, not speeding, and basically following the rules?
People might go Soviet and say Waymo is better overall, so let's force it on everyone. Maybe we could achieve an even better result by being stricter about enforcing the rules and leveraging far less complicated technologies: preventing the car from going over the speed limit, preventing it from starting when the driver is drunk...
I wonder if this will ever make it to Canada's winter wonderland, where there is ice and snow on the road for four months of the year (more or less, depending on the area).
The hard part about self-driving is dealing with / predicting the behaviour of humans. That's something that humans are good at and robots are bad at. And SF has more weird human behaviour than anywhere in Canada.
Predicting the behaviour of vehicles in snow is something computers are much better at than humans. Solving the sensor problem for driving in rain/snow also seems like a standard tech problem, one where vehicles can quickly become much better than humans, who are limited to the mark 1 eyeball.
Clearly Waymo can afford to self-insure, but it's difficult to account for the underspend and overspend depending on accident rate.
Insurance companies have the advantage of amortization over a much larger number of policies. Even insurance companies will have their own insurance.
“The benchmark was calibrated using both mileage (driving exposure) and residence zip-code (geographic region). Specifically, only the claims associated with vehicles registered to addresses (i.e., where the insured resides) within Waymo’s operating zip codes in San Francisco, the Phoenix metropolitan region, Los Angeles, and Austin were included.” Page 11
“Waymo also almost exclusively operated on surface streets (non-access-controlled freeways) with a unique distribution of driving that is representative of a ride-hailing fleet. In contrast, the benchmark represents the privately insured driver population that resides in these geographic regions. The associated benchmark mileage has more freeway driving than the Waymo ADS.” Page 26
“Including freeway driving makes this benchmark crash rate artificially lower, so, by including freeways in this study’s benchmark, the benchmark crash rate underestimates the true driving risk of where the Waymo ADS operates.” Page 26
“Four claim frequencies were independently calculated for vehicles within Waymo’s operating zip codes in San Francisco, Phoenix, Los Angeles, and Austin. Thereafter, these frequencies were proportionally weighted according to Waymo’s mileage distribution across these four regions. This resulted in the definition of a benchmark driving population which had driven the same driving distribution across geographic regions as the Waymo Driver.” Page 12
“Waymo also almost exclusively operated on surface streets with a unique distribution of driving that is representative of a ride-hailing fleet… Due to all three of these limitations being expected to artificially suppress the benchmark crash rate (underestimation), the benchmarking results in this study are considered to be conservative.” Page 26
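The mileage-weighted benchmark described on page 12 is a simple weighted average. A sketch with made-up numbers; the per-region claim frequencies and mileage shares below are hypothetical, not taken from the report:

```python
# Hypothetical claim frequencies (claims per million miles) per region.
benchmark_freq = {"SF": 4.2, "Phoenix": 2.9, "LA": 3.8, "Austin": 3.1}

# Hypothetical share of Waymo's total mileage driven in each region (sums to 1).
waymo_mileage_share = {"SF": 0.35, "Phoenix": 0.40, "LA": 0.15, "Austin": 0.10}

# Weight each region's human-driver claim frequency by Waymo's mileage there,
# yielding the claim rate a human fleet with Waymo's geographic driving
# distribution would be expected to show.
weighted_benchmark = sum(
    benchmark_freq[region] * waymo_mileage_share[region]
    for region in benchmark_freq
)
print(round(weighted_benchmark, 2))  # -> 3.51
```

The point of the weighting is that the benchmark population's raw average would be dominated by wherever insured drivers happen to live, not by where Waymo actually drives.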
1) Not filtering out drunk drivers is downright misleading. I am less interested in "is mandatory self-driving safer than human drivers?" than "is mandatory self-driving safer than mandatory breathalyzer ignition?" There are some uniquely human downsides which are fair comparisons to AI (distractions, slowness, anger), but drunk drivers should be removed from this.
2) The bigger problem is people rationally speeding or running red lights. Waymo is strictly legal, but Tesla self-driving historically encouraged speeding, rolling stop signs, etc. Ubers and Lyfts are preferred when you're in a hurry precisely because human drivers bend the law. Human speeding is not caused by a lack of intelligence; drivers don't need an AI to figure out that it's dangerous. Humans do it because they are selfish and reckless, and someone like Tesla will make a driving AI that fulfills those selfish demands. There is absolutely no reason to think that a speeding Waymo would be safer than a speeding human.
Studies like this make me worry about a medium-term future where driving is more dangerous for everyone: lawful drivers get into more accidents because they're using sub-human AI, while nothing changes for unlawful drivers because they're using dangerous AI, turning the AI off and driving drunk, etc.
If I were to hide something fishy, I’d define the shared coverage to include zip codes with fewer human accidents:
* zip codes are relatively small, so you have a lot of variance per code due to randomness of rare events;
* Waymo mostly covers urban areas and nearby suburban areas, so you can claim suburban zones were only recently covered by Waymo or that the insurance had too few members in certain zip codes to include them.
I don’t think that study is wrong, but it is in Waymo’s interest to claim they have few accidents, and in insurers’ interest to claim they have many, or rather more, accidents so as to raise premiums.
Except of course if Waymo were to crash a lot more on freeways, which we can't assume won't happen just because human drivers don't. Far from making human drivers look better, this makes the comparison unequal and, for that reason, uninformative.
Half of those apples are orange and smell like citrus fruits.
I'd want to compare to our best drivers (e.g. ambulance drivers), not average drivers.
I mean when you are literally removing 1 person from possible injury, you literally can't have the same number of bodily injury claims. It can only be less.
>The garaging zip code of the insured vehicle was used as a proxy for the city (Phoenix, San Francisco, Los Angeles, Austin) in which the vehicle drives. Waymo also almost exclusively operated on surface streets (non access-controlled freeways) with a unique distribution of driving that is representative of a ride-hailing fleet. In contrast, the benchmark represents the privately insured driver population that resides in these geographic regions. The associated benchmark mileage has more freeway driving than the Waymo ADS. There are several considerations when examining these results with respect to this limitation. First, freeway driving has a lower crash rate (Scanlon et al., 2024a). Including freeway driving makes this benchmark crash rate artificially lower, so, by including freeways in this study’s benchmark, the benchmark crash rate underestimates the true driving risk of where the Waymo ADS operates. Second, driving outside of these denser urban areas that the Waymo ADS operates would likely represent a reduction in overall relative crash risk. For example, commuters from the city would likely experience a reduced crash risk as they travel to less densely populated areas (Chen et al., 2024). Previous studies have shown that most injury collisions occur within a small radius from residency, and that American drivers rarely travel far from their place of residence, with approximately 80% of one-way household trips being less than 10 miles (DOE, 2022). Third, the benchmark drivers garaged in the Waymo deployment area are not operating with the same distribution of mileage within the geographical limits as the Waymo ADS. Chen et al. (2024) explored the effect of Waymo’s driving distribution on benchmark crash risk and found that - should the benchmark driving distribution match Waymo’s in San Francisco, Phoenix, and Los Angeles - the benchmark police-reported crash rates would have been between 14% and 38% higher. 
Due to all three of these limitations being expected to artificially suppress the benchmark crash rate (underestimation), the benchmarking results in this study are considered to be conservative. Surely, there is an opportunity in future work to leverage new data, such as insurance telematics, to more precisely define and leverage the benchmark driving exposure data to better account for this potential confounder.
It's not just a mere "reduction" as the percentages imply. The number of injuries is almost zero.
https://waymo.com/blog/2024/01/from-surface-streets-to-freew...
If you put even teenage drivers on the Waymo-covered routes and compared them with the average, you would probably see the same drop.
Some fun predictions, gathered from other comments + my own:
- You'll have to litigate a dead family member's case against a faceless company that conveniently deleted crucial logs.
- They will make you wear wireless tags every time you cross a street, and fine & shame you if you don't.
- Tags will contain your real identity, so that you'll get a bonus on your medical insurance.
- Insurance companies will refuse to insure your old mechanical car and will force you to buy a subscription for the new shiny car, which will have to be replaced every N years.
- The new car will refuse to drive you to X location because you overused your monthly CO2 quota.
On the contrary: of all my negative experiences biking in NYC over 10 years, the _only_ time I was able to hold a driver accountable was when they were riding a Revel scooter, in which case the "faceless company" took them off their platform. This comment seems to significantly overestimate how easy it is to hold human drivers accountable.
https://www.sfchronicle.com/bayarea/article/3-month-old-infa...
Also, it’s certainly better than having everyone drive themselves home after drinks at a bar?
What does this have to do with self-driving cars? What kind of "crucial logs" do today's cars even keep?
>- They will make you wear wireless tags every time you cross a street, and fine & shame you if you don't.
>- Tags will contain your real identity, so that you'll get a bonus on your medical insurance.
1. people already carry "wireless tags" everywhere they go. It's called their phone. They don't seem to mind.
2. Why would they do this, when they've spent billions developing lidar? If anything, mandating this would be bad for them, because it weakens their moat against competitors, who could then develop cheaper self-driving systems that don't need expensive lidar technology to avoid pedestrians.
>- Insurance companies will refuse to insure your ild mechanical car, and will force you to buy a subscription for the new shiny car, that will have to be replaced every N years.
1. Insurance companies can theoretically do this today. Why aren't they doing it? Why would they suddenly do it with self driving cars?
2. All of this seems... fine? If you're driving a deathtrap that's a hazard to you and others, and the insurance company is on the hook for the millions in damages you can inflict (if you severely injure someone), they should have the right to refuse coverage. I doubt they'll actually refuse to insure such cars though; it'll just be really expensive.
>- The new car will refuse to drive you at X location because you overused your monthly C02 quota.
Pure speculation and fearmongering.