I wonder: for all the money thrown into self-driving car research, could we have had an autonomous rail system by now? The technology for mostly-autonomous rail is well understood, and most of the cost is in the infrastructure that supports the system. It seems to me that self-driving cars try to short-circuit that infrastructure build-up. They try to "automate the device" rather than "produce an automated system that solves the problem of moving people and goods".
Specifically, I wonder whether, for the cost and time spent on CPU-and-engineer-driven research and development of autonomous cars, we could have had nationwide autonomous rail rolled out by now.
We already have autonomous rail systems. It's called positive train control, and it was fully implemented like a year or two ago (mandated in 2009, but you know how government works, lol): https://en.wikipedia.org/wiki/Positive_train_control
The train conductor's job has become more and more automated to remove the chance of human error. It works with a system of very reliable sensors that indicate where every train engine is on the rails.
Given the huge amount of cargo any particular train carries, I don't think there's any intent to cut the last two humans (the conductor + engineer) out of their jobs. Their salary costs are minuscule compared to the safety value they deliver, even if the job of driving a train has been almost entirely automated away by now.
Using the B method: https://en.wikipedia.org/wiki/B-Method
https://link.springer.com/content/pdf/10.1007%252F3-540-4811...
https://arxiv.org/pdf/2005.07190.pdf
All this was developed in the '80s and '90s. It would be interesting to see how it has evolved since. Obviously, with an ML/AI approach it would be different now, although there might be some ways to specify constraints or boundaries for an AI system: for safety, comfort, physics, etc.
Is it? The theory of block signals and path signals doesn't change because AI was invented. And I have my doubts that AI could do better than path-signal algorithms.
https://en.wikipedia.org/wiki/Railway_signalling
OpenTTD players, wassup? You can run these block signals / path signals in video games. It's a bit complicated to set up and the terminology is arcane... but the algorithms aren't so advanced that they require "AI" or anything. (If at a block signal, wait until the exit signal is ready. Etc., etc.) It's actually pretty fun (OpenTTD is a video game built around these signals!!), because if you mess up your signals, you get deadlocks and/or "race conditions" (aka: risk of a deadly crash). But once you get used to the methodology, you can build incredibly complex junctions that automatically and safely route trains everywhere.
That's why you had legions of railway nerds playing with trains in their basements all day: they're trying to get their path signals / block signals / chain signals correct so that those toy trains traverse their toy tracks automatically without crashing. These concepts date from like the 1930s or something (path signals are newer).
EDIT: Apparently "before 1923": https://www.amazon.com/gp/product/117705681X. Definitely an ancient and arcane art of old wizards. Railway engineers were dealing with semaphores, race conditions, and deadlocks __LONG__ before us programmers even existed!!
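The core block-signal rule described above ("wait until the block ahead is clear") is simple enough to sketch in a few lines. This is a hypothetical toy model for illustration, not OpenTTD's actual implementation; the class and method names are made up:

```python
# Toy model of absolute block signalling: a track is divided into block
# sections, and a train may only enter a block that no other train
# occupies. Otherwise it waits at the signal protecting that block.

class BlockSection:
    def __init__(self, name):
        self.name = name
        self.occupant = None  # at most one train per block, ever

    def try_enter(self, train):
        """Signal check: 'green' only if the block ahead is clear."""
        if self.occupant is None:
            self.occupant = train
            return True   # signal green: train proceeds into the block
        return False      # signal red: train waits

    def leave(self, train):
        """Train clears the block, freeing it for the next train."""
        if self.occupant == train:
            self.occupant = None

block = BlockSection("B1")
assert block.try_enter("train_A") is True   # block clear: enter
assert block.try_enter("train_B") is False  # occupied: wait at signal
block.leave("train_A")
assert block.try_enter("train_B") is True   # block freed: now enter
```

The deadlocks mentioned above appear as soon as two trains each occupy a block the other needs: both `try_enter` calls return False forever, which is exactly the failure mode signal layouts are designed to avoid.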
> On December 29, 2020, FRA announced that PTC technology is in operation on all 57,536 required freight and passenger railroad route miles, prior to the December 31, 2020 statutory deadline set forth by Congress.
We got that. We literally got that.
If the sensor in X is broken, then the server will say "I cannot prove it is safe to move through X", and the train will stop.
I'm not a transportation engineer, however, so I'll defer to anyone with actual experience in the field. But the idea is that our sensors are so effective today that it is better to "prove" each leg of the journey is safe with "positive train control" than to take the opposite, traditional approach (ex: a sensor detects a problem, which stops the train).
That is to say: all sensors in the USA are in positive-train-control mode, so trains will stop if any sensor malfunctions. We're now in a "default stop" state for all trains unless those sensors are working, deployed across the USA.
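The "default stop" idea above can be sketched as a small fail-safe check. This is an illustrative model of the logic only, not the real PTC protocol; the function and field names are assumptions:

```python
# Sketch of "positive" movement authority: a train may proceed only if
# every sensor on the next leg affirmatively reports 'clear'. A broken
# sensor produces no reading (None), so it yields no proof of safety,
# and the train stops by default.

def movement_authority(sensor_reports):
    """Grant authority only when every sensor positively says 'clear'.
    Missing or failed readings deny authority by default."""
    return all(report == "clear" for report in sensor_reports.values())

leg = {"track_circuit_1": "clear", "switch_position": "clear"}
assert movement_authority(leg) is True    # safety proven: proceed

leg["switch_position"] = None             # sensor in X is broken...
assert movement_authority(leg) is False   # ...no proof: default stop
```

Contrast with the "opposite normal approach" mentioned above, which would only stop the train when a sensor actively reports a fault, and so would keep moving past a dead sensor.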
Actually laying the infrastructure for mass transit via rail is an entirely different league of cost from what has been dumped into self driving cars.
We have a hard enough time agreeing on how to do light rail transit in places that want it, and then actually getting it done.
Even if it does succeed, it seems to be about convenience anyway.
Also, it was a result of a mass shooting. Have some tact.
What problem does autonomous rail solve? The single driver is already a rounding error in total costs. Also, rail is already a controlled environment where collisions are much less likely than on the road, so the fruit is much higher up the tree on that aspect too.
It seems to me that bringing autonomy to rail would have little effect on its bottom line.
I realised this while discussing self-driving cars with my friends.
I used the example of Uber Eats. The problem statement is "I don't want to cook", and a reasonably acceptable solution, IMO, is cloud kitchens + delivery, as opposed to building a cooking robot.
Cloud kitchens could automate 80% of the repeatable stuff, because it makes sense to solve that problem at scale.
Standardize the ovens you distribute around the world. Simplify cooking so that it's just "put this pre-cut, foil-wrapped block into the standard-temperature oven for 15 minutes", then "deliver".
Which networking protocol best maps to this?
And what if we had smart traffic lights that were aware of every car in the area surrounding an intersection...
I mean FFS certain tech companies track all vehicles that drive by/near their corporate campuses and report that back to the city...
And that's almost a decade old now...
So apply the same idea, but report the data back to a traffic-management system that is also trained on all the traffic patterns for a given intersection, to best optimize for those patterns...
Even WiFi avoids relying on collision detection by routing client-to-client traffic via the AP instead of being peer-to-peer.
Maybe BGP instead? "Route failed - go this way instead, as a backup"
This hypothetical armored car needed many features; the most important was that it must be able to move across the muddy no man's land reliably.
Tests had shown that regular-sized wheels would get stuck in the mud, and a bigger wheel has a larger contact area to spread the load. So the Russians built an armored car with the largest wheels possible. The Russian tests were outstanding: the Tsar Tank rolled over a tree!
https://en.m.wikipedia.org/wiki/Tsar_Tank
The French design was to use caterpillar tracks. We know what works now since we have a century of hindsight.
--------
Spending the most money to make the biggest wheel isn't necessarily the path to victory. I think it's more likely that the tech (aka, caterpillar track equivalent) hasn't been invented yet for robotaxis. Hitting the problem with bigger and more expensive neural network computers doesn't seem to be the right way to solve the problem.
The model of "someone will find this training computer useful" is... fine. Google TPUs, NVidia DGX, Intel Xe-HPC, AMD MI100, Cerebras wafer-scale AI: these are computers nominally aimed at the market of selling computers / APIs / SDKs that will make training easier.
It's a pretty crowded field. Someone has probably struck gold (NVidia has a lead, but... it's still anyone's game, IMO).
-------
If Tesla's goal is to compete against everyone else (or to make a chip that's cost-competitive with everyone else's), Tesla needs more volume than the (alleged) 3000 chips (quoted from the article; I dunno where they got this figure, but... there's no way in hell 3k chips is cost-effective).
That's the name of the game: volume. The reason NVidia leads is that NVidia sells the most GPUs right now, which means its R&D costs are applied to the broadest base, which means its customers' engineering costs (aka: CUDA training) are spread across the widest number of programmers, leading to a self-reinforcing cycle of better hardware, lower costs, and a larger community of programmers to learn from.
I don't really think it's that many.
The industry collectively sank untold billions into the blind belief that neural algorithms would somehow turn into "AI."
10 years later, there's no "AI," and not even a single money-making niche use.
Right now the industry is deep in the sunk-cost fallacy, and the people who promised this and that to investors are now desperate, doubling down on their bets in hopes that "at least something will come out of it..." Basically casino mode.
There are tons of money-making niche uses of neural networks: from the branch predictor in your CPU, to trading on the stock market, to image-search engines.
The huge qualitative differences between GPT-2 and GPT-3 seem to suggest that they will, if you just keep adding orders of magnitude more connections and more data.
Even assuming that it's true (which I very much doubt: anyone willing to spend enough money with Nvidia can have a powerful supercomputer fairly quickly), it's a very dishonest statement. It's comparing a deployed system with a lab prototype of a single component of a potential supercomputer that may be fully operational in a few years (software is a really, really, really big deal here).
Tesla's compute-to-researcher ratio is definitely rare
https://github.com/mlcommons/training_results_v1.0/tree/mast...
https://www.youtube.com/watch?v=j0z4FweCy4M&t=8047s
It sounds like they're going to have to write a ton of custom software in order to use this hardware at scale. And, based on the team being speechless when asked a follow-up question, it doesn't sound like they know (yet) how they're going to solve this.
Nvidia gets a lot of credit for its hardware advances, but what really made its chips work so well for deep learning was the huge software stack it created around CUDA.
Underestimating the software investment required has plagued a lot of AI chip startups. It doesn't sound like Tesla is immune to this.
Can you substantiate this concretely? How about a list, with direct sources? (Not opinion pieces.)
That's just from the article; off the top of my head:
* NYC to LA fully autonomous drive by 2017.
* 1M Robotaxis on the road by 2021.
* Hyperloop.
* Solar roof tiles.
* All superchargers will be solar-powered.
* Tesla Semi.
Sure, some of these things may be "just around the corner" or "ramping up now", but some of these are claims going back almost 5-10 years, where Elon says "2 weeks", "next year", "2 years", really whatever it takes to be just believable enough to get enough people to buy into a future where Tesla is worth 10x what it is today.
There's something highly off-kilter with the relative mildness of the above and the vitriol of the criticism directed at it. This actually makes me feel really good about Elon Musk and Tesla's prospects!
Dense numeric processing for image recognition is a key foundation for what Tesla is trying to do, but tagging the object is just the beginning of the process: what is the object going to do? What are its trajectories? What is the degree of belief that an unleashed dog, versus a stationary baby carriage, is going to jump out?
We are just beginning to scratch the surface of counterfactual and other belief-propagation models, which are hypersparse graph problems at their core. This kind of chip, and what Cerebras is working on, are the future platforms for the possibility of true machine reasoning.
> but the short of it is that their unique system on wafer packaging and chip design choices potentially allow an order magnitude advantage over competing AI hardware in training of massive multi-trillion parameter networks.
I kind of wonder if Tesla is building the Juicero of self-driving. [0]
Beautifully designed. An absolute marvel of engineering. The result of brilliant people with tons of money using every ounce of their knowledge to create something wonderful.
Except... you could just squeeze the bag. You could just use LIDAR. You could just use your hands to squish the fruit and get something just as good. You could just (etc etc).
No doubt future Teslas will be supercomputers on wheels. But what if all those trillions of parameters spent trying to compose 3D worlds out of 2D images is pointless if you can just get a scanner that operates in 3D space to begin with??
[0] https://www.theguardian.com/technology/2017/sep/01/juicero-s...
But pure RGB needs millions of dollars to make a reliable realtime depth sensor, plus custom silicon and a massive annotated dataset.
It might just be that one company can do it, but it's a hefty gamble.
If that was all that was needed then it would be done.
New hardware architectures can't really be used to their full potential without years of research into techniques that are suited for them. The more people who have access to the hardware, the faster we can discover those techniques. If Tesla is serious about their hardware project, they need to offer it to the public as some kind of cloud training system. They don't have enough people internally to develop everything themselves in a short enough time to remain competitive with the rest of the industry.
Any idea how OP made that conclusion?
My GeForce 1080 Ti has 1.3MB of in-core L1 cache (28 streaming multiprocessors, 48KB of L1 each). It also has L2, but not too much: slightly under 3MB for the whole chip.
The GPU delivers about 10 TFLOPS of FP32, which needs 2x the RAM bandwidth of FP16. I'm generally OK with that level of performance, at least until the GPU shortage is fixed.
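The cache figure above is just arithmetic on the per-SM numbers, which a quick sketch can confirm (using the 1080 Ti figures as stated in the comment, not independently re-verified):

```python
# Check the total L1 figure: 28 streaming multiprocessors with 48KB
# of L1 each adds up to roughly 1.3MB for the whole chip.
sms = 28
l1_per_sm_kb = 48
total_l1_kb = sms * l1_per_sm_kb

assert total_l1_kb == 1344                      # 1344 KB in total
assert round(total_l1_kb / 1024, 1) == 1.3      # i.e. ~1.3 MB
```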
Any "astute readers" here who know who the partner would be?
To my limited knowledge, ASML is a fab-equipment maker, not a silicon IP provider.
Like Melkman says, Broadcom is a good guess, not only because of past rumors, but IIRC they also did work on Google's TPU (I could never figure out if that was actually confirmed?). Interconnect IP like that is definitely in their wheelhouse.
What exactly is CFP8? How many bits does one instance of CFP8 use? What mathematical operations are supported? How does one configure the floating point?
https://www.johndcook.com/blog/2018/04/11/anatomy-of-a-posit...
Perhaps CFP8 values are parameterized 8-bit posits, where the parameter is the value es. The larger es is, the greater the dynamic range, at the expense of precision. Two examples:
posit<8, 0> (es = 0) has 64 as its largest positive number and 1/64 as its smallest positive number.
posit<8, 1> (es = 1) has 4096 as its largest positive number and 1/4096 as its smallest positive number.
The formula for the largest positive number of an 8-bit posit is:
(2^(2^es))^6, i.e. useed^(n-2) with useed = 2^(2^es) and n = 8.
Posits don't have NaNs and have only a single infinity (±∞), so they can use more of the 8-bit patterns for numbers than floating-point formats can.
I wonder: is CFP8 = posit<8, es>?
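The maxpos formula above is easy to evaluate for a few es values; a short sketch makes the dynamic-range trade-off concrete. (Tesla hasn't published CFP8's encoding, so this only illustrates posit arithmetic, not CFP8 itself; the function name is mine.)

```python
# Largest positive value of an n-bit posit with exponent-size
# parameter es: maxpos = useed^(n-2), where useed = 2^(2^es).
# For 8-bit posits the exponent of useed is n - 2 = 6.

def posit_maxpos(n, es):
    useed = 2 ** (2 ** es)
    return useed ** (n - 2)

assert posit_maxpos(8, 0) == 64        # posit<8,0>: largest is 64
assert posit_maxpos(8, 1) == 4096      # posit<8,1>: largest is 4096
assert posit_maxpos(8, 2) == 2 ** 24   # range grows very fast with es
```

The smallest positive value is simply 1/maxpos, which matches the reciprocal pairs given above (1/64 and 1/4096).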
It's interesting because it's clearly exciting, leading-edge tech. Unlike most Tesla tech, which consumers ultimately use and where we all get to assess strengths and weaknesses, this tech is going to remain inside the Tesla castle: unviewable, unassessable. We'll probably never know what real strengths or weaknesses it has, never understand all the ways it doesn't work well, or as well as competitors'. It's going to remain an esoteric dollop of computing.
So, where are the so-called robotaxis?
And lidar wouldn't be expensive if manufactured in automotive volumes. Certainly less, per vehicle, than Musk charges people for "full self driving" at the moment.
California allows autonomous vehicles to be tested on the road, so long as every disengagement is reported (along with total miles driven etc). Waymo is testing, reporting mileage and disengagements. So are Toyota, Nvidia, Mercedes, BMW, Cruise, Lyft and Apple.
Guess who's too shy to have driven a single autonomous mile in California, where faults have to be reported? That's right, Tesla!
Tesla might be able to make vision-only driving work. But Musk has been promising deadlines then failing to achieve them for years. They've put all their chips on 'no lidar' and they've had a bunch of problems that lidar could trivially solve - such as detecting a fire truck or concrete barrier right in front of the vehicle. So it's far from obvious to me that they've got a winning approach.
Apparently they have neither, as they missed their deadline three years ago and have continued to miss it every year since.
> All the other players try to solve this with lidar and cars that cost around 500k to build
Citation needed?
> Tesla may need another 10 years
What has you this pessimistic? Tesla promises full self-driving by the end of the year, every year. Are you saying a random commenter on the internet knows more about the state of their AI than they do?
> This approach will never solve L5.
Again, Tesla advertised FSD as Level 5, with robotaxis ready for 2020. Sounds like it was falsely advertised, right?
“they have pretty much 0 data except for the maps they generate themselves”
What do you mean by this? You realize the bottleneck for training data generation is always human labeling, not raw amount of data, right?
Comma has been using that approach from the start with a cheap smartphone-like device.
No amount of "training" can fix the problem of "AI" not being AI.
These people have a very poor idea of what they are talking about when they say the phrase "artificial intelligence." It's a clear misuse of the term.