Cruise AVs are being remotely assisted (RA) 2-4% of the time on average, in complex urban environments. This is low enough already that there isn’t a huge cost benefit to optimizing much further, especially given how useful it is to have humans review things in certain situations.
The stat quoted by nyt is how frequently the AVs initiate an RA session. Of those, many are resolved by the AV itself before the human even looks at things, since we often have the AV initiate proactively, before it is certain it will need help. Many sessions are quick confirmation requests (is it OK to proceed?) that are resolved in seconds. Some take longer and involve guiding the AV through tricky situations. Again, in aggregate this is 2-4% of time in driverless mode.
In terms of staffing, we are intentionally over staffed given our small fleet size in order to handle localized bursts of RA demand. With a larger fleet we expect to handle bursts with a smaller ratio of RA operators to AVs. Lastly, I believe the staffing numbers quoted by nyt include several other functions involved in operating fleets of AVs beyond remote assistance (people who clean, charge, maintain, etc.) which are also something that improve significantly with scale and over time.
> Those vehicles were supported by a vast operations staff, with 1.5 workers per vehicle. The workers intervened to assist the company’s vehicles every 2.5 to five miles
The NYT is definitely implying 1.5 workers per vehicle intervene to assist driving at first read. Only after reading the above comment do I notice that they shoved the statements together using different meanings for “workers” as they didn’t have the actual statistic on hand.
Basically, I am curious whether these remote-assistance drivers are located in foreign nations without American licenses. And if so, how did you get them cleared to be able to “drive” cars on America's roads?
Thanks
Ps: I took a cruise once in Austin and it needed remote assistance.
Hoo boy, sure wish the NYT had clarified that. That changes things significantly.
I suspect this is a moment where the news media is looking for a scapegoat / villain from the AV sector, and your team is an easy target given what has happened recently.
I believe that transparency is the right way to address issues and concerns. Please keep doing that.
Funny, since I thought full autonomy was the goal of the company. 2 percent human intervention isn't scalable.
This puts your cars and the safety of the whole city at the mercy of the reliability of mobile networks. This is a fundamental architectural change in the design of the city. Do the telecom operators take liability if they can't meet their designed SLOs for availability? What are the worst-case scenarios that you have considered?
Word of mouth via tech influencers is far more important than you're giving credit.
What the answer glosses over is that even with just 3% of the time requiring human assistance (2 minutes out of every hour), the term ‘autonomous vehicle’ is not really applicable anymore in the sense everybody is using/understanding that term. The idea behind that term assumed ‘full’ autonomy. Self-driving cars. And there is no reason to assume that this is still in sight. The answer puts the ‘self-driving car’ on the shelf.
PS. Being a human assistant seems to me a difficult job, given the constant speed and concentration requirements.
This is the stupidest idea I’ve ever heard
But I am betting that quite a lot of the electronic components of cars these days are tied to things, my DPF is a great example, that come from safety and environmental regulation. If you pull the ECU out and tricked the motor into running anyway, I am betting your emissions profile will suck massively. Ditto transmission. The rest of my car seems to involve safety features, sensors and cameras mostly.
By the time you reinvent the car to exclude all these things, then make it roadworthy again, I reckon you would end up with almost exactly the standard modern car again. Car makers aren't putting computers in for funsies.
> durable and profitable "dumb" cars
Both of these are pretty much wrong.
First of all, just based on regulation and safety, a car is going to have a huge amount of electronics. Second, regulations about emissions also require a huge amount of electronics. You can't get away from that no matter how dumb you want to make a car. Maybe you don't like it, but society prefers fewer people dying even if that is inconvenient for you.
Granted, in many ways an old, inefficient, smaller car is still safer and better for the environment than a modern huge car. But that is a failure of the regulation.
Next, the idea is that such cars would be profitable. This is simply inaccurate. Car companies can barely make money on cars as it is; in fact, without the parts business they don't really make money on those cars at all.
Additionally most consumers simply prefer to buy cars with lots of multimedia options and things like that. Having the ability to warm up your car before you get in is useful for example. Having phone conversations in your car is useful. Having GPS in your car is useful. People like having heated seats. People like having good sound in the car.
People simply aren't buying the cars you seem to be demanding, and building such cars simply wouldn't be profitable. In fact, if you look at China you will generally see an increase in exactly the types of features you don't like.
To suggest an autonomous car company gets into that business makes no sense at all.
And I say this as somebody who doesn't own a car and generally thinks cars should be replaced as much as possible and banned in many places.
There are many ways repairability could be improved without going back to 1960s cars. Many of those should be embraced, but you will simply not get around some inherent complexity of the modern world.
If you want a brand new car for $10k you’re going to need a time machine, or move to a country without modern safety standards.
Given everything you know now, was it wise to push for expansion over improvements to safety and reliability of the vehicles? On one hand, there is certainly value in expanding a bit to uncover edge-cases sooner. On the other hand, I'm not convinced it was worth expanding before getting the business sorted out.
My guess is that given the relatively large fixed costs involved in operating an AV fleet, it makes some sense to expand at least up to that sort of 'break even' point. Do we know what that point is? Put differently, is there some natural "stopping point" of expansion where Cruise could hit break-even on its fixed costs and then shift focus towards reliability?
Maybe the article answers the following, but don’t know since I haven’t read it yet.
- median, p95, p99 latencies for remote assistance
- max speed vehicle can go when RA is activated.
I don’t trust that proper attention will be given to improvements in the tech once profit and ROI are weighed against human labor costs, especially in lower-wage nations.
I don't think this is a viable strategy though given the enormous costs and challenges involved.
There doesn't exist a short-term timeline where Cruise makes money, and the window is rapidly closing. They needed to expand to show big revenues, even if they had to throw 1.5 bodies per car at the problem.
Prediction: GM will offload cruise, a buyer will replace leadership and layoff 40% of the company. The tech may live to see another day, but given the challenges that GM has generally (strikes, EVs, etc), they can no longer endlessly subsidize Cruise.
If this lets them have the only Level 5 system on the market, they could double that and millions would happily pay. Suppose you're a trucking company: would you rather pay $50k/year or $5k/year? That’s a stupidly easy choice.
Americans drive roughly 500 hours per year. If they can replace 98% with automation and the other 2% with someone making $20/hour, that only costs them ~$200/year, which then drops as the system improves.
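The arithmetic above can be checked directly. All three inputs (annual hours, human fraction, wage) are the commenter's assumptions, not measured figures:

```python
hours_per_year = 500    # rough annual driving time per American, as claimed
human_fraction = 0.02   # share of driving time still needing a remote human
wage_per_hour = 20      # assumed remote-operator wage, in $/hr

# Cost of the human slice of the driving: 500 * 0.02 * 20
annual_cost = hours_per_year * human_fraction * wage_per_hour
print(annual_cost)  # → 200.0, i.e. about $200/year, matching the estimate
```

The estimate is linear in all three assumptions, so halving the human fraction as the system improves halves the cost.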
Negative unit economics and massive expansion are not.
Imagine a car rental service where someone in an office building drives the empty car to you, then drives it back when you're done with it. No taking public transportation just to get back to the garage to pick up the next drop-off. Imagine simply swapping the driver controlling a long-haul truck remotely when it's time for a shift change. With good handoff the truck can be driving 24/7 without ever slowing down.
Really the only autonomy you need in that situation is enough to pull the truck/car/whatever over and park it if the connection is lost.
Tesla has the scale and for some reason regulators give them a pass. I wouldn't bet against Elon, but we aren't there yet...
...
> Cruise’s board has hired the law firm Quinn Emanuel to investigate the company’s response to the incident, including its interactions with regulators, law enforcement and the media. / The board plans to evaluate the findings and any recommended changes. Exponent, a consulting firm that evaluates complex software systems, is conducting a separate review of the crash, said two people who attended a companywide meeting at Cruise on Monday.
After the first [edit: the first performative charade, about the little girl in a stroller], why should we trust that the second isn't also a performative charade? What independence or credibility does some hired law firm have that the company itself does not? How about using an independent third party?
It's a fallacy everyone conveniently ignores. The woman the Cruise car ran over was actually first hit by a human driver who is still at large; not a peep about him. The press kinda just accepts this as the "cost of doing business".
The way I see it, Vogt sincerely believes autonomous cars will make things safer from the #2 killer of children under 19 (after guns) by a wide margin [1], and therefore accelerated the rollout past what was safe. I see no evidence otherwise.
[1] https://www.ted.com/talks/chris_urmson_how_a_driverless_car_...
Many people have to be killed AT ONCE for it to be news worthy these days.
To me, that's evidence that it's performative. First, it's a talking point; it looks, smells, walks and talks just like typical corporate/industry framing and messaging, with even a 'think of the children!' line, and the redirection (from the safety of autonomous cars, the topic, to whatabout something else). Second, its repetition by Urmson is further evidence - that's how talking points work. Third, the public's repetition of it, in surprising detail, such as in your comment, is also what we'd expect. Finally, throw in some tears, 'I get emotional' lines, etc. (per the NYT article), and I don't know how it can be missed.
Could it all be legit? Anything is possible - including fully autonomous cars!
Cruise barely drove a few million miles, with new modern cars, good weather, and the ability to choose optimal roads, and yet it already severely injured a pedestrian.
We can argue about Cruise hitting the pedestrian, but reportedly the major injuries were caused by the Cruise itself: after reaching a complete stop, it decided it had to clear the road and dragged the screaming pedestrian, ending with its axle on top of her.
How is it performative?
Is it not sad that a 4-year-old girl in a stroller got killed by a car? That it barely made the news?
Or is that just not sad and is normal these days?
Spare me your trolling.
Back-of-the-napkin math: cars drive at an average of 18 mph in cities, so an intervention every 2.5-5 miles means one every ~10-20 minutes. Let’s assume each takeover lasts 1 min, and that you need remote drivers not too far away for ping purposes, so at the same hourly rate. To guarantee you can take over all requests immediately, overlapping demands (birthday-paradox style) mean you end up needing something like 30 drivers for 100 vehicles? It’s not that incredible of a tech…
Let's model it as every takeover lasting 1 min, and vehicles needing help 5% of minutes. Then the number of vehicles needing help in any given minute follows a binomial distribution with p=0.05 and N=100, and you find that 99.99% of the time you need fewer than 15 drivers per 100 vehicles. By 20 drivers, you cover all but 2e-8 of the time (roughly once per century).
But that's a bit misleading: it's a small-fleet effect. With 10k cars, you cover all but 4e-6 of all one-minute periods with just 600 drivers (0.06 per car).
By 100k cars, you have 44 9s of reliability with 0.06 drivers per car.
There are some complications, since the length of time vehicles need help will most likely be Poisson distributed (with an average of 1 min) rather than fixed, etc. But the point stands: for large fleets you only need a modest margin over the average rate to get effectively complete coverage under normal conditions. It would only be in really extreme situations, like a hurricane badly messing up a lot of the Eastern seaboard, that you'd maybe run into issues (which, admittedly, is a real potential problem).
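The tail numbers above can be reproduced with a short stdlib-only script. The 5%-busy and one-minute-takeover figures are this comment's modeling assumptions, not Cruise's actual numbers; the function name is mine:

```python
import math

def drivers_needed(n_cars, p_busy, coverage):
    """Smallest staff size k such that P(demand <= k) >= coverage,
    where demand ~ Binomial(n_cars, p_busy). The pmf is computed in
    log space via lgamma so large fleets (10k+ cars) don't underflow."""
    log_p, log_q = math.log(p_busy), math.log(1.0 - p_busy)
    cdf = 0.0
    for k in range(n_cars + 1):
        log_pmf = (math.lgamma(n_cars + 1) - math.lgamma(k + 1)
                   - math.lgamma(n_cars - k + 1)
                   + k * log_p + (n_cars - k) * log_q)
        cdf += math.exp(log_pmf)
        if cdf >= coverage:
            return k
    return n_cars

# Same 99.99%-coverage target at two fleet sizes: the operators-per-car
# ratio falls sharply as the fleet grows (the pooling effect described).
for fleet in (100, 10_000):
    k = drivers_needed(fleet, 0.05, 0.9999)
    print(fleet, k, k / fleet)
```

At 100 cars this lands around 15 operators, while at 10,000 cars the ratio drops toward the 5% base rate, consistent with the figures in the comment.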
It's the disengagement rate that drives the number of operators you need per vehicle, and therefore the economics. Theoretically, this rate should be improving steadily at all these companies.
Cruise seems to have a bad disengagement rate right now (<5 miles between assists seems really low), but methinks nytimes might be partaking in some obfuscation here.
Waymo's should be much better already. Curious by how much though.
What? That's literally insane compared to the current standard of 100 drivers for 100 vehicles. They're literally reducing 70% of the labor cost compared to uber/lyft/etc.
It's pretty reasonable to expect that this will improve over time as well. This is exactly how you want a startup to roll out a new technology.
* Build a pretty good base implementation
* Do things that don't scale for the edge cases
* Reduce the things that don't scale over time
Even if they can only improve this to 10 for 100, that's still a massive improvement.
In my area, a small, rural city, this would literally be a game changer. Right now, there's a single Uber within 15 minutes - if I'm lucky. Meanwhile, Cruise could drop a handful of cars in town, let them idle (at no cost), then pay a driver for a few minutes of intervention every now and then.
This also enables intercity transit. Most of that is highway miles. Outside of the start and end, those are easy and predictable. You could have dozens/hundreds of miles where Cruise can compete with the cost of privately owned vehicles.
Lastly, this makes it feasible for Cruise to reposition cars between cities without huge costs. Currently, that's basically impossible. Any human driven car needs to offer the driver a ride in the opposite direction.
Not saying it’s right or wrong, just stating the other half of the equation
My Subaru can avoid accidents; it even reacts to things that 100% would not have been accidents, even on black ice. So I don't think this is what the remote drivers are doing.
If that's correct, then the remote signaling of a problem and the human's response and control must have flawless availability and low latency. How does Cruise achieve that?
Cellular isn't that reliable. Maybe I misunderstand something.
An overwhelming majority of Americans will choose 45,000 deaths in car crashes annually (last year's number) in human-driven cars over 450 deaths/year with all self-driving cars.
In the American (and probably ALL) mind(s), human agency trumps all.
https://www.vox.com/2016/3/11/11586898/meet-kyle-vogt-the-ro...
https://en.wikipedia.org/wiki/Captain_Laserhawk:_A_Blood_Dra...
I hope Chief Safety Officer isn't just a sacrificial lamb job, like CISO tends to be.
Is the "interim" part hinting at insufficient faith, and maybe future blame will be put on how the VP Safety performed previously (discovered after the non-interim person is hired)?
> [...] and said she would report directly to him.
Is the CSO nominally responsible for safety?
Does the CSO have any leverage to push back when their recommendations aren't taken, other than resigning?
Cruise employees worry that there is no easy way to fix the company’s problems.
Company insiders are putting the blame for what went wrong on a tech industry culture.
What, because car companies with car company culture are doing such a great job building self-driving cars?
I'm rooting for both Cruise and Waymo here. Self-driving cars would be great for humanity. Good luck to the teams working hard to make them happen.
It may be true that statistically fewer fatalities per mile happen with autonomous cars than with human-driven cars. But that's irrelevant. If the car kills one person because it did something utterly stupid like driving under a semi crossing the highway or dragging a pedestrian along the ground, the public will not accept it.
This is another example of the uncanny valley problem: Most "smart" devices are merely dumb in new ways. If your "smart" gizmo is only smart in how it collects private information from people (e.g. smart TVs), or it's merely smarter than a toggle switch, that's not what the public considers smart. It has to be smarter than a reasonably competent human along almost all dimensions; otherwise you're just using "smart" as a euphemism for "idiot savant." Self-driving cars are a particularly difficult "smart" problem because lives are at stake, and the number of edge cases is astronomical.
Regarding Cruise's suspension, how likely is it that the backup driver restarted the car to drive again after it stopped with the pedestrian underneath?
Most people are pretty sure my theory is wrong. I have absolutely no evidence this is true, it's just some crazy idea that popped into my head one day.
Like imagine there's some industrial block in Da Nang where there are thousands of guys and gals who think they're doing RL for some AI model somewhere. X takes a bathroom break and forgets to hand over the controls to another specialist, and when he gets back he discovers the model has crashed.
Next he reads on the news that there's been a fiery Tesla crash somewhere near Oakland, and he realizes that something is horribly wrong in his world.
We could use multiple predictive language models to determine what direction the story line takes next, but I imagine he quits his job right then and there and is determined to find out the truth behind the program.
What will happen next?
Better yet, base the story off-world so that we aren't so close to the horrible reality of it -- if this is true and it's probably not.
Government Motors can only sustain such a loss on their books for a short time. This is probably why Vogt has been pushing so hard for market dominance.
What the article says is that Cruise vehicles need some sort of assistance every 2.5 to 5 miles (I highly doubt this number is accurate). Not that they’re getting into emergency situations that frequently.
It isn’t remotely the same. This would be like if human operators typed some of chatGPTs answers
Getting a remote driver connected might take a while, but afterwards it seems like a mostly solved (in practice) problem.
I'm imagining a situation where a car comes across a parked truck on a one-way road (common in cities). A human operator comes in the loop to ensure that it's actually safe to switch lanes and pass. Check for things like emergency vehicles, unusual pedestrians, etc. They don't need to literally take the wheel, just confirm that the vehicle can take a specific action.
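That "confirm a specific action" pattern can be sketched in a few lines. This is purely illustrative (all names and the timeout value are mine, not Cruise's); the key property is that silence defaults to staying stopped:

```python
import queue
import threading

def request_confirmation(action, operator_queue, timeout_s=10.0):
    """Ask a remote operator to approve `action`; fail safe on silence.

    The vehicle proposes a concrete maneuver and blocks until an operator
    replies or the deadline passes. No reply (or anything other than an
    explicit approval) means the vehicle holds its position.
    """
    try:
        verdict = operator_queue.get(timeout=timeout_s)  # wait for a reply
    except queue.Empty:
        return "HOLD"  # no answer in time: stay stopped
    return "PROCEED" if verdict == "approve" else "HOLD"

# Usage: simulate an operator approving a lane change 0.1 s later.
q = queue.Queue()
threading.Timer(0.1, lambda: q.put("approve")).start()
print(request_confirmation("pass parked truck on left", q))  # PROCEED
```

The design choice worth noting is that the operator only ever answers yes/no to a maneuver the vehicle already planned, so network latency delays the trip rather than endangering it.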
Title of the post should be edited though since it's not the headline of the piece and this information, while interesting, isn't the main thrust of the article.