Don’t even get me started on the “didn’t take psych 102: Attention and Memory”-level cluelessness required to believe a human can safely pay attention well enough in a vehicle that reliably tricks you into believing it’s autonomous to take over in the split seconds before a disaster…
I find it hard to believe that the Tesla and Auto Manufacturer positions aren’t knowingly deceptive. I mean, what are they going to say? “It’s too hard so we’re just waiting for Waymo or Cruise to license their tech once it works”?
I’m gonna stop here before I start mocking geohot… I seriously can’t believe the journalists who wrote those early stories were willing to risk their lives like that…
The auto manufacturer approach is also showing progress. In CA and NV you can buy and operate a Mercedes with Drive Pilot, which is Level 3 certified. Under the right (very restrictive) conditions, which essentially come down to "sitting in highway traffic on your commute," you legally do not have to pay attention to the road and can read/watch/work/etc.
I'd personally never trust an autopilot unless it's either backed by human-level AI which has also had years of driving experience, or it's in some very highly constrained environment (maybe airport bus going from gate to plane). Out on a highway or public road system is the most unpredictable environment possible.
This is about the peak of what you can get with automated lane keeping and braking. I don't see any route from this point to anything like level 4.
How do you plan to do that? Will you wrestle the code away from Waymo? Or do you plan to put in the long years of thousands of man hours to develop it and all the costs of the hardware while you do it?
Sadly its self-ownership is only "according to legend" rather than anything battle-tested.
Won't it be great once we have fully self-driving cars? Heck, I could buy a car and then rent it out to other people like a taxi when I'm not using it, and it would pay for itself. Maybe I could even make a profit!
...
If I could make more money than the car costs to purchase and maintain, without any additional work on my part, why would the company sell me the car at that price in the first place rather than just running the taxi service themselves and keeping all of that extra profit?
https://www.forbes.com/sites/cyrusfarivar/2023/12/04/judge-a...
Not sure if you count this as "legit" or not, but I haven't seen similar incidents from Waymo. (Perhaps I've just missed them - if so, links welcome!)
The question of Camera vs LIDAR+Camera is a narrow technical question about how to construct a 3D scene. That's it. It says nothing about making sense of this 3D world for which you have a 3D point cloud, and it says nothing about how to actually navigate that world. Say you're driving down the road and there's a bit of construction, with a guy holding a SLOW/STOP sign directing traffic. LIDAR will tell you it's an octagonal sign, but it can't tell you what it says; you need a camera to read the sign. It doesn't tell you how to drive, how fast you should go, how much space to give the guy with the sign, etc. Everything AV-related that is not constructing a 3D scene is actually the same across all AV stacks, and that includes the hardest part: the actual driving itself.
Your example of needing to read a stop sign isn't a great one. At least in North America, an octagonal sign is always a stop sign. A better example of your point would be a speed limit sign.
But Waymo never said you don't need cameras. Hell, they have 29 cameras in each vehicle compared to Tesla's 8.
Your point about their approaches being more alike than different is somewhat true, but you wrongly attribute the LiDAR vs camera debate to Waymo marketing. It's Elon and Tesla fans who started it and incessantly repeat it even to this day. Most rational folks say use whatever you can to get it working (which Waymo did) and optimize later.
There are atmospheric conditions and obstructions that lidar can see through that cameras can't.
Cameras also seem prone to being blocked by a small splash of mud/dirt. Is anyone on this thread knowledgeable enough in the domain to know if that's an issue? I thought of it while moving my head sideways to see around a temporary sight obstruction on my windshield. Luckily the windshield is big, and I can move my head. Cameras are small. I guess you just put several so you have an effectively large camera array? It does mean more redundancy is necessary than I would have initially thought.
If you want to drive across the ultra-straight highways of the flyover states, it's game changing. If you don't do that, it's not that useful.
I have watched enough recent Tesla self-driving ride along videos on YouTube to suspect you might be mistaken on this point. Tesla intends to launch a cybertaxi fleet and their software looks like it will be good enough to get them there without lidar or additional sensors.
I just watched the latest video from AIDRIVR on YouTube. AIDRIVR is a TSLA pumper-and-dumper who has dedicated their channel to uncritical praise of FSD. In the first third of the video FSD v12 runs two stop signs, once directly into oncoming traffic in a 1-way traffic control and once at a stop where the cross traffic does not stop. This stuff is not even a little bit ready for fully supervised operation. https://youtu.be/fpoXr_z_6a4?t=565
It is one thing to cherry-pick flawless drives on a sunny day and upload it to YouTube while having someone behind the wheel ready to take over the glorified driving assistant system. It is another to run a commercial driverless service open to the public 24/7 in one of the biggest urban areas, knowing that riders will record everything, assuming accident liability, and keeping a nice safety record without someone behind the wheel.
I have a comma.ai in our minivan and it works great. It's much better than Honda's built-in lane-following tech.
https://www.teslafsdtracker.com puts miles to disengagement at 30 and miles to critical disengagement at 300 for all v12.x.y versions. Note: this is crowdsourced data and the users themselves get to decide what's critical and what's not.
As far as numbers required to make it fully self driving, it's at least 3 orders of magnitude worse than the big players. Waymo and Cruise routinely had 30,000+ miles per disengagement during their California testing. That's one disengagement for roughly 3 years of driving.
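The "orders of magnitude" claim can be checked against the numbers quoted in this thread. A minimal sketch, where the 10,000 miles/year figure is my own assumption for typical annual mileage (the disengagement figures are the approximate ones cited above):

```python
# Back-of-the-envelope comparison of the disengagement rates quoted above.
# All figures are approximate: Tesla's are crowdsourced/self-reported,
# Waymo's come from its CA DMV testing era.
AVG_MILES_PER_YEAR = 10_000        # assumed typical annual mileage

tesla_miles_per_diseng = 30        # teslafsdtracker.com, all v12.x.y
waymo_miles_per_diseng = 30_000    # Waymo/Cruise CA testing figure

gap = waymo_miles_per_diseng / tesla_miles_per_diseng
years_per_diseng = waymo_miles_per_diseng / AVG_MILES_PER_YEAR

print(f"gap: {gap:.0f}x")                        # gap: 1000x
print(f"years per disengagement: {years_per_diseng:.0f}")  # 3
```

So against all (not just critical) disengagements, the gap is roughly three orders of magnitude, and 30,000 miles is about three years of typical driving.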
Well, being as how people get to pick and choose when they use it, and that the driver has to remain vigilant at all times, I'm not surprised.
But this is easy to test: stick random people in the car and go to random locations with FSD, see how it works. Why haven't they demonstrated this yet?
I am very skeptical of the "weeks without intervention". It's cool technology, but I never had a single trip where I didn't need to intervene at least once.
It would regularly blow through school zones, failing to read the posted sign.
On a couple of occasions it veered off the road on to the shoulder.
My thinking is the car will never be level 4. It doesn't have sufficient sensors or NN compute power.
Tesla already silently abandoned the "just an over-the-air update away" approach with the announcement of a dedicated robotaxi vehicle.
However, camera plus ultrasonic sensors and radar, but no lidar, is not unique to Tesla Vision; other companies take that approach too.
We don't know what it costs Waymo to operate their car. The fact that they charge money doesn't make them a real business, just as people paying for FSD doesn't make it a real business.
Both are promises until a breakthrough occurs. Waymo is starting small-scale but with a full setup, even if guided by humans here and there. Tesla starts with millions of cars across multiple countries but with far more modest functionality.
Waymo is scaling up; Tesla FSD is finally starting to look like the promise, with a good chance of a zero-disengagement ride, already operating at the scale of many countries, and launching on another continent right now.
It's interesting to observe how companies with radically different approaches are about to arrive at the same goal almost simultaneously.
https://arstechnica.com/cars/2024/05/on-self-driving-waymo-i...
Humans suck at driving: https://jakeseliger.com/2019/12/16/maybe-cars-are-just-reall...
Waymos avoid many of the Uber challenges: foul-smelling "air fresheners," dubious music / talk radio choices, etc.
Waymo sometimes does weird, unexpected things - but safely. Once it seemed to change its mind about the optimal route a few times over the course of 10 seconds, switching safely between two lanes back and forth a few times before committing. It used its turn signal fine, and the lanes were clear, so it wasn't a problem, but this isn't something humans do.
Sometimes it behaves oddly, but I have developed confidence that it will do those odd things safely.
Oh, I disagree, this is something I observe and in fact do myself quite a lot. We all run through our minds which route might be the quickest depending on certain factors. The difference is that Waymo (or any tech) will base this on actual data (i.e., getting there quicker) vs. humans, who will be more emotionally driven (e.g., frustration at the driver in front, wanting to take the more scenic route, being undecided about stopping at that cafe halfway).
I'm all for self-driving in highly populated areas. In a perfect world I'd like to see it integrated into all vehicles, and when entering specific areas you are told your car will enter self-driving mode. Arguably this makes the most business sense for Waymo: license the underlying tech to manufacturers that already have the capacity to produce vehicles rather than compete with them.
As far as humans suck at driving, it's not that they suck on average, but that the ones who do suck at it don't always have a sticker saying that they suck.
No one with kids wants to ride in taxis with kids all the time. Ditto for anyone with hobbies that require transporting large things, like kayaks, bikes, etc. Or people with large pets. Or grocery shopping for more than 1-2 people. Or any of the dozens of other conveniences that Americans have come to expect from owning a car over the past century.
I can take my time to get car seats in and kids buckled, without feeling the pressure to hurry from the human driver.
I don't have to feel like my kids misbehaving are going to annoy a human driver, or get me a bad review in Uber/Lyft.
I don't have to worry about tipping, or the driver taking a longer route to charge me more.
I don't have to worry about small-talk, or awkwardly sitting in silence when I normally would be talking with those I'm driving with.
Obviously this doesn't cover all use cases for a car (pretty sure you can't load a kayak onto a Waymo because you'd block sensors), but it seems WAY better to me as someone who doesn't like to deal with the people aspect of Taxis.
You mention Americans at the end of your comment, but the rest of the world isn't the same. Waymo doesn't really have to limit itself to the States once they get the concept worked out.
Americans have become emotionally attached to cars because of what they enable them to do. That might take a while to die. But in Europe cars are more of a pita to own and run because we have less space. I don’t have any great love for mine. As soon as waymo gets here and is reasonably priced I’ll get rid of my car.
People in the Netherlands get by fine without a car: kids just bike to school with their friends instead of sitting in the back seat in traffic for 45 minutes every morning. This is because money and space are not spent exclusively on car infrastructure, but also on cycling, walking, and public transport.
> Just because they're driven by computers now isn't going to magically change all the reasons that people didn't use them before (hint: it wasn't because they were driven by humans).
Sort of. The primary reason I don't hire vehicles more often is cost, which is related to the human driver. The wealthiest families I know are much more likely to use a car service to ferry family members around.
If there was a car service that could whisk us to school, work, grocery shopping, etc with no more than 15 minutes advanced notice for less than the cumulative cost of a similarly-sized private vehicle I'd sell one of our cars in a heartbeat. I have no idea whether that future is years or decades away, but when it occurs many families I know would go from 2 or 3 cars down to 1.
I'll admit that going from 1 car to 0 cars would be a tougher sell. For that I'd have to be confident in five nines of availability and vehicles that can haul equipment like bikes and kayaks. But that doesn't seem like an insurmountable problem, just a logistical one that'll take a bit longer.
The driving experience itself is on par with the "best" drivers I've ever ridden with (things like stopping at actual stop signs, for instance, and not racing from one traffic light to the next, and being courteous to bikes and pedestrians), not to mention just the peace and tranquility of being in a car solo when you're not having to drive (I know, I know, mass transit is better for countless reasons and this is actually doubling down on human isolation, which is probably not great long term).

Anyway, I have zero interest in getting into an Uber at this point. I'd wait longer and pay more for a Waymo if given the choice. And I'm fully aware people will, if this works more broadly, lose jobs because of it. I'm not insensitive to that, but I don't think the genie is going back in the bottle barring catastrophic incidents by Waymo et al. that cause regulators to kill self-driving cars altogether.

Note that I did witness an incident where, on a road with no lane markings, the Waymo straddled a left-turn "lane" and a straight-travel lane. It's an intersection I transit often, one that normal drivers have great trouble with, and frankly it makes me uneasy every time I turn left there as well. The Waymo was definitely perplexed by it.
For those who talk about how Phoenix's roads are straight and wide... This is not true in Los Angeles (nor in SF though SF is more of a compact grid than LA). For those of you unfamiliar, a lot of the streets in LA where Waymo operates today are very narrow, with cars parked on both sides and so there's inadequate room for two cars to go down them without waiting for another car to pass. These same streets have zero lane markings on them. I've experienced this several times in Waymo to date where the car just "gets it," though it's almost too cautious when it needs to get over to let another car pass when there's not enough space for both. And if you read all of that and say "what about the weather?" It's obviously an issue and I fully agree it will delay the rollout "everywhere."
All that said, I cannot wait until I can jump in one of these things, from Waymo or any other company, and safely go up to the mountains or some other road-trip destination. The economics of longer trips, particularly to rural areas, are likely tricky bc of the inability to count on a return fare, but, man, I do think self-driving cars are a radically important technology that will vastly change how we transit and, really, how we live. That is, if they don't fuck up too much en route to getting there.
I agree with all of your points about Waymo vs. Uber-like ridesharing—the average Uber ride is so much less safe that it’s hard to argue for.
But I also agree with your aside about the growing isolation of society—the longer term implications of every event, meal, and errand being separated by autonomous journeys are staggering.
So the question is, how do the societal isolation factors play into your decision making? (Honest question, not a gotcha, I’m curious how others think about these tradeoffs.)
I live in a big city (larger population than Phoenix) in the UK and I've never even seen a self-driving car. Anywhere. I don't even think such a thing exists on public roads in my country. That Gibson quote about the future not being evenly distributed, etc.
Just a data-point.
You won't know if people have Level 3 "Self Driving" cars because, unlike Level 4, Level 3 cars always have a human sitting in the driver's seat; it's just that maybe the human isn't paying attention and maybe the car is driving anyway. It may be difficult to gauge (beyond guessing) how many people you see are bad drivers and how many aren't actually driving at all under L3...
L1 (the machine does some of the work but a human driver is always doing much of the driving) is certainly something you see and don't even think about. Intelligent Cruise control (ie it won't smack into the car ahead but instead slow down) on a motorway, maybe automatic lane keeping on somebody's fancier or newer car, it's not "Self driving" as you'd understand it, but it's something.
The way these "Levels" work is that L3 to L4 is the point where we transition from "the human is legally driving but the machine is offering more and more assistance" to "the machine is legally driving and the human is asked less and less often to do anything at all". As a result, a person who is literally blind, and thus couldn't possibly drive the car or obtain a license to do so, can (and they do) use a Waymo, just like they'd use an Uber, but they cannot do the same with Tesla "Full Self Driving".
That’s an interesting way of saying you live in London ;)
(Phoenix urban area is more populous than every urban area in the UK except for London)
Countries develop at different rates on different things.
A significant portion of traffic deaths also occur in special conditions: at night, with intoxicated drivers, in bad weather.
Existing self driving cars won't even drive in those more difficult conditions.
In terms of passenger miles driven, if you compare to non-intoxicated humans, the expected number of deaths for self-driving cars is still below 1, assuming they were as safe as non-intoxicated human drivers.
Safer cars are an excellent goal but they're not automatically a given result for self driving.
> Waymos avoid many of the Uber challenges: foul-smelling "air fresheners," dubious music / talk radio choices, etc.
And they introduce new ones, like being dropped off blocks from your destination because the car refuses to drive on perfectly fine roads, service being unavailable in poor weather, and extending Google's tracking of everything you do online to offline.
:D
Aside: you can just ask Uber drivers to turn off the radio.
To put some numbers on it in the US cars are driven about 3.2 x 10^12 miles per year, and around 4 x 10^4 people are killed in car accidents (drivers, passengers, pedestrians, and cyclists).
That's one death per 8 x 10^7 miles.
There are around 2 x 10^6 people non-fatally injured in car accidents per year in the US. That's an injury every 1.6 x 10^6 miles.
There are around 4 x 10^6 non-injury car accidents per year in the US, which is one every 8 x 10^5 miles.
If we assume all miles driving are equally risky and that we drive 40 miles per day 365 days a year, then we would expect to be in a non-injury car accident around once every 55 years, be injured in a car accident around once every 110 years, and be killed in a car accident around once every 5500 years.
Of course almost no one drives all their miles at times and in conditions when the risk per mile is average so when estimating your personal risk you need to take that into account.
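For anyone who wants to check the arithmetic, a minimal sketch reproducing the figures above:

```python
# US crash-risk arithmetic from the figures quoted above.
total_miles = 3.2e12          # vehicle miles traveled per year
deaths = 4e4
injuries = 2e6
non_injury_crashes = 4e6

miles_per_death = total_miles / deaths                 # 8e7
miles_per_injury = total_miles / injuries              # 1.6e6
miles_per_crash = total_miles / non_injury_crashes     # 8e5

# Personal risk, assuming 40 miles/day every day and average risk per mile.
annual_miles = 40 * 365                                # 14,600 miles/year
print(round(miles_per_crash / annual_miles))    # ~55 years per fender-bender
print(round(miles_per_injury / annual_miles))   # ~110 years per injury
print(round(miles_per_death / annual_miles))    # ~5479 years per death (≈5500)
```

Note how sensitive the "per lifetime" framing is to the equal-risk-per-mile assumption; the caveat in the parent comment matters.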
Phoenix has the perfect climate for self-driving cars.
It will require a major technological leap in order for them to succeed in the "real world" (fog, rain, snow, etc).
We handle both dense fog and heavy rain on the latest vehicles. The best blog post is probably https://waymo.com/blog/2021/11/a-fog-blog/ but you can find a lot of videos in the rain.
Snow and very cold weather is a challenge for sensor cleaning. We've done some testing in both NYC and Buffalo (https://waymo.com/blog/2023/11/road-trip-how-our-cross-count...) to collect data.
I'm not sure whether this reflects their own preferences, what they think customers want, or if they are just completely oblivious.
Knowing BigCo's reputation, I think it's equally possible that the Waymo and/or BigCo accounts banned will belong to the actual perp, the complainant, or some random rider in between… what a world…
If they are doing 50k rides a day, then they would appear to have a remarkable safety record.
It will be interesting to see if these investigations lead to a repeat of the Cruise debacle or if this will become the price of doing business.
[0] https://www.reuters.com/business/autos-transportation/us-saf...
> The car went onto a freeway, where it travelled past an on-ramp. According to people with knowledge of events that day, the Prius accidentally boxed in another vehicle, a Camry. A human driver could easily have handled the situation by slowing down and letting the Camry merge into traffic, but Google’s software wasn’t prepared for this scenario. The cars continued speeding down the freeway side by side. The Camry’s driver jerked his car onto the right shoulder. Then, apparently trying to avoid a guardrail, he veered to the left; the Camry pinwheeled across the freeway and into the median. Levandowski, who was acting as the safety driver, swerved hard to avoid colliding with the Camry, causing Taylor to injure his spine so severely that he eventually required multiple surgeries.
> Levandowski and Taylor didn’t know how badly damaged the Camry was. They didn’t go back to check on the other driver or to see if anyone else had been hurt. Neither they nor other Google executives made inquiries with the authorities. The police were not informed that a self-driving algorithm had contributed to the accident.
> According to former Google executives, in Project Chauffeur’s early years there were more than a dozen accidents, at least three of which were serious. One of Google’s first test cars, nicknamed KITT, was rear-ended by a pickup truck after it braked suddenly, because it couldn’t distinguish between a yellow and a red traffic light. Two of the Google employees who were in the car later sought medical treatment.
It was a long time ago, but Larry Page was well aware of it, and imagine if that incident received fair coverage and investigation.
I recognize accident-lawyer work when I see it :) They charged Waymo’s insurance to the max.
Self driving buses will be such a boon for public transportation. Now you can have 24 hour buses, that operate on holidays as well, or even dynamic, short term routes based on demand (eg: after a concert or sports event), without being dependent on the availability of pre-allocated human drivers.
Electric autonomous vehicles don't need a driver, and electricity is relatively cheap. So you don't get much economy of scale by making them bigger. Most city journeys would use under a kWh. Even at current grid pricing that's cheap.
Eventually, cheap autonomous vehicles could be mass produced at low cost and would have very low operational cost. So the ride cost would be comparable to, or lower than, current public transport options.
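A quick sanity check on the "under a kWh" claim. The efficiency and price figures here are my own assumed typical values, not from the comment:

```python
# Rough energy cost of a short city trip in an electric AV.
MILES_PER_KWH = 4.0    # assumed typical EV efficiency
PRICE_PER_KWH = 0.15   # assumed grid price, USD/kWh

trip_miles = 3.5       # a short city journey
energy_kwh = trip_miles / MILES_PER_KWH
cost_usd = energy_kwh * PRICE_PER_KWH
print(f"{energy_kwh:.2f} kWh, ${cost_usd:.2f}")  # 0.88 kWh, $0.13
```

So the marginal energy cost of a short city ride really is on the order of a dime; nearly all of the ride cost is vehicle, maintenance, and (today) driver.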
Trains (and train-like options such as metros) are vastly more efficient than cars in number of people moved per unit of time per area used. That might not be a big deal in suburbia, but in dense inner cities it's one of the most important drivers of public transport.
If you're not paying for a conductor, can't you make trains much more appealing? They could run every five minutes, and the last mile can be solved with an autonomous car waiting for you when you arrive.
AVs give us a path toward a world where very few people need to own their own car. We can put all those parking spaces to better use. We can improve equity by giving more people access to safe, reliable, affordable, and convenient point-to-point transportation. Being able to consistently get a ride to where you need to go is something we consistently under-appreciate. It means being able to get a better paying job on the other side of town. Or not having to worry about missing a dialysis appointment, or a meeting with your parole officer or therapist. When the marginal cost of a robotaxi/robobus ride is close to zero is when the AI economic boom will really begin.
Interestingly, no one ever argued for the profitability of cars, so all we can do now is to calculate the overall economic costs and societal benefits and that's where public transport clearly and easily wins.
But the impact of taxis on road traffic in a dense city is comparable to the impact of private cars - perhaps even more so as they're often travelling empty between rides. If every journey which was previously done with a car is done with a taxi, there's no reduction in vehicle traffic - meaning the same problems of congestion and pedestrian safety.
Driverless cars can probably drive closer on highways to increase throughput, but that doesn't really help in cities or residential areas. Ultimately if lots of people shift to driverless taxis to get around, there will be far more vehicles on our streets.
> But even the most bullish believers in autonomous transportation acknowledge the tech still has a ways to go before it’s reliable enough for widespread deployment on U.S. roads.
I will give tempe/scottsdale credit though - they have their roads around the major tourist hubs in GREAT shape - the lines crisp and the lights bright and new - I think it makes it much easier for a waymo to get around.
Waymo is a real business serving 50,000 rides each week delivering paying customers to their destination. If you haven't tried it yet, the product is amazing. Private, doesn't cancel, safe, and smooth. I will never take Uber again if I have the choice.
How much money is Waymo bleeding every quarter? Maybe the investors don’t care, but it’s relevant if you want to call it a real business.
The biggest (only?) complaint I had is that it would not pickup/dropoff at the curb at our hotel. So if it was raining, we'd have had to walk out in the rain to meet the car in a parking spot.
The real business is an entire transit system, with purpose-built vehicles of various sizes, centralized routing, etc.
Has it though? They've come an impressively long way to have 50,000 rides a week, but that needs to increase a thousand fold to justify the $6B of venture capital and $30B valuation. That's a lot of cars and a lot more work than it takes Uber to bring on another underpaid owner driver (Uber has 23 million rides per day)
The other common mode is secondary to an original crash. Vehicles either are pushed into different roadways, over abutments, or down hills, which causes the vehicle to roll or otherwise crash into pedestrian areas without warning. This is most common in winter conditions.
I've heard they do 50,000 rides per week in SF, LA, Phoenix combined.
Assuming they make $20/ride, that's still $1M/week, or $52M/year. I'm sure they spend in Billions/year.
They would have to scale out to every major city in America and add another 10000 cars before they can turn a profit.
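The revenue math above, spelled out (the $20 average fare is the commenter's assumption, not a published figure):

```python
# Rough Waymo revenue estimate from the comment above.
rides_per_week = 50_000
avg_fare_usd = 20            # assumed average fare

weekly_revenue = rides_per_week * avg_fare_usd
annual_revenue = weekly_revenue * 52
print(weekly_revenue)   # 1000000   ($1M/week)
print(annual_revenue)   # 52000000  ($52M/year)
```

Against spending plausibly in the billions per year, $52M of gross revenue illustrates why scale, not unit price, is the open question.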
> Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year.
Of course that was in 2016 and as far as I’m aware we are still awaiting those details.
Also, I doubt there is a point in you "renting out" your Tesla. Tesla the company has enough money to flood the road with their vehicles, and your vehicle is irrelevant. Have you ever heard of an individual renting their personal Camry to a taxi company or an Uber driver?
If yes, perhaps cities with fewer cars can skip the taxi step and go straight to smart buses.
No need to wait for autonomy. They have (had?) such a service in Cairo, but unlicensed jitney vans were already common there. They never launched it elsewhere.
BYD recently announced that they will not be using BYD's technology. Not good enough for production cars.
Waymo still has rather bulky rotating LIDAR scanners. That technology needs to shrink more before wide deployment. A few years ago, there were lots of LIDAR startups, but few LIDAR buyers, so that industry collapsed.
No, there isn't. We do have a team of folks to support situations where we aren't confident and choose to "phone a friend". This recent blog post covers some of it in more detail:
https://waymo.com/blog/2024/05/fleet-response/
Most importantly: at no point does someone remotely "drive" the vehicle. They can direct it to say "hey make a u-turn and go to this new point", but they aren't remotely driving.
This doesn't line up with other statements made by Waymo though.
That blog post as an example:
> The Waymo Driver does not rely solely on the inputs it receives from the fleet response agent and it is in control of the vehicle at all times.
Yet in an incident in January, when a Waymo ran a red light and caused a moped to crash, the narrative is:
> In January, an incident took place where a Waymo robotaxi incorrectly went through a red light due to an incorrect command from a remote operator, as reported by Waymo.
So I'm curious, is it a case of "The Waymo Driver doesn't always follow road rules itself." or "Remote Ops can make a car run a red light against the Waymo Driver's programming."
[0]: https://www.forbes.com/sites/bradtempleton/2024/03/26/waymo-...
I can't speak to their RVA operations.
Most people in society don't really have a desire to run amok.
1. They're mostly even more local. The Waymo Driver is in your car, the "driver" for say a DLR train is inside the train too. Unlike Waymo they aren't running multiple live feeds to remote oversight, even the emergency human intervention is literally on board with you. There's somebody wearing a uniform telling those tourists that no, Abbey Road is an outer suburb with a sewage pumping station, they're on the wrong train for the famous Beatles photograph. The person in the uniform is trained to drive the train if there's some reason the automation can't do it, nobody can do that from a control room miles away. In the even higher (and rare) GoA systems where nobody aboard can drive the train even if they need to, remote oversight still may need to dispatch a specialist to rescue a failed train.
2. They're mostly "grade separated" that is, they're either underground or suspended in the air, or maybe in fenced off ground-level areas, so you can't use a "hacked" train to hurt anybody except its passengers or maybe, in some cases, passengers on a nearby train.
Turns out cars are a bit too bulky and pricey to repurpose as shanks.
This seems like a good time to point out that the argument makes too many assumptions to be useful, like that moving fast and breaking things will in fact lead to faster progress overall. In the case of robotaxis, the group moving carefully and deliberately is the clear leader, and many competitors who took the faster/less careful approaches have shuttered along the way. When uber's self-driving division killed someone, for example, it didn't lead to an earlier arrival of self-driving.
This is relevant to all sorts of business stuff where we're always asked to move faster than we can reliably move. It's astoundingly easy to forget that sometimes bad rollouts can shutter a project even worse than slow rollouts.
Compare to a rideshare driver that will often drop you off right in front of your destination, even if that is an illegal maneuver.