Seems like more of a scale-out than a scale-up problem; you want a view of the whole operation, from recording the footage down to the final archive, to find the real bottleneck.
It's possible from a PCIe bandwidth point of view, but it's going to require some seriously specialist USB interfaces. I'm tempted by the same solution others suggest: fewer cards per machine, cheaper and less extreme hardware feeding into normal switches, and then out across fibre.
What a crazy thing to be doing; it's the kind of mad situation companies get themselves into when they should just have networked cameras and VPNs, or at the very least distributed ingest machines!
It still involves annoying work by humans, but it would be much more straightforward: see a bay showing the "idle" color, remove any finished card and put it in the finished box, grab a card from the to-do box and put it in, then press the button to signal a new card is waiting.
If that process took 6 seconds, 1000 cards would mean 100 minutes of tedious work, but you can spread it over multiple people or (since the card-read takes time) do it in short bursts interspersed with other activities.
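The arithmetic behind that estimate, as a quick sanity check (the 6-second swap time is the assumption from above):

```python
# Back-of-envelope: total hands-on time for swapping 1000 cards,
# assuming each swap (remove, insert, press button) takes ~6 seconds.
cards = 1000
seconds_per_swap = 6
total_minutes = cards * seconds_per_swap / 60
print(total_minutes)  # 100.0 minutes of tedious work
```

Split across four people, that's about 25 minutes each, which makes the "short bursts" approach quite workable.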
So from there, my thought is indeed: why shouldn't we get 1000 micro-SD card readers (or multi-readers), attach those to a bunch of Pis streaming the cards to a central server, and have people put their SD card in there at the end of the day? After that, you can kick off the reads in a batch, re-seat any cards that aren't seated correctly, and that's it.
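A minimal sketch of the server-side batch kickoff, assuming each Pi's readers show up as mount points the central server can reach (the paths and names here are made up for illustration; the demo uses a temp directory instead of real mounts):

```python
# Hypothetical batch ingest: pull every mounted card's contents into a
# per-card archive directory, a few cards in parallel at a time.
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def ingest(card_mounts, archive_root, workers=8):
    """Copy each mounted card into archive_root/<card-name>/."""
    archive_root = Path(archive_root)

    def pull(mount):
        dest = archive_root / mount.name
        shutil.copytree(mount, dest, dirs_exist_ok=True)
        return dest

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(pull, card_mounts))

# Demo with fake "cards" in a temp dir instead of /media/... mounts.
root = Path(tempfile.mkdtemp())
cards = []
for i in range(3):
    m = root / f"card{i:04d}"
    m.mkdir()
    (m / "clip.mp4").write_bytes(b"\x00" * 1024)
    cards.append(m)

done = ingest(cards, root / "archive")
print(len(done))  # 3
```

In the real setup the worker count would be tuned to however much aggregate bandwidth the network and the archive disk can actually absorb.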
One could reduce hardware cost by having an LED turn on or off when a card in a reader can be swapped, but with a thousand people doing this, complicated instructions tend to be a bad thing, unless the transfer is fast enough to wait for.
But it's always fun how complicated some of these physical problems become if you scale them up. Like, consider how many storm drains are out there in a city. If each needs to be touched once every 3 years... suddenly you've created a dozen jobs.
Only on reading the comments did I think "Wait, who needs this and why?" I like the Mr. Beast angle; it's fun to think about if nothing else.
I think at that point you'd just do one PC per 200 readers and pull the videos to a local server for processing. There's no magic here, just work.
I'm assuming the SD cards come from a fleet of dashcams or similar, and that the driver is responsible for turning in their card at the end of a run or whatever.
I'd deploy a fleet of tiny WiFi card readers. Just an ESP32 and a card slot. Each individual unit is not fast, but if you have a few dozen units running at once you could easily saturate the dedicated ingest WiFi network at each hub location. The readers are dirt cheap and the only supporting infrastructure required is WiFi and a bunch of 5v power adapters.
You'd have a rack of these guys next to the key locker or timeclock. Pick up your keys and a blank card, drop off your keys and card at the end of the shift. Could even get extra fancy and use LEDs to indicate which card is yours for the day, or flip on an error light when the card is worn out.
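Rough numbers for that aggregate-throughput argument (the per-reader and access-point figures below are my assumptions, not measured values):

```python
# Assumption: an ESP32 reading a card over SPI and pushing over WiFi
# sustains maybe ~2 MB/s; a dedicated ingest AP manages ~40 MB/s of
# real-world throughput.
per_reader_MBps = 2
ap_MBps = 40

readers_to_saturate = ap_MBps // per_reader_MBps
print(readers_to_saturate)  # 20 readers fill one AP

card_GB = 64
hours_per_card = card_GB * 1024 / per_reader_MBps / 3600
print(round(hours_per_card, 1))  # ~9.1 hours per 64 GB card
```

So each hub would want multiple APs (or a faster per-reader link) if the cards are large and need to be drained overnight, but the per-slot hardware cost stays tiny.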
You probably don't need each video to transfer as fast as possible, more likely you just want maximum overall throughput. So just build a bunch of very cheap and slow nodes. On aggregate you should get decent performance.
I am also a bit perplexed as to why they'd need to ingest 1,000 micro-SD cards at once. If they are such a large org, why not investigate alternative solutions?
Action cameras use microSD cards as their main storage medium. You could connect them via WiFi on some models, but that would be painfully slow compared to dumping the SD cards directly.
Seems to me the majority of the overhead with 1000 SD cards is in the filesystem translation layer, multiplied by the USB contention from dealing with so many devices.
So if it were me, I'd have a big, fast mounted filesystem whose only purpose is to serve as a dd destination, get the dd done as quickly as possible with block sizes tuned to the system bus, and then do the data transfer 'offline' once the SD has been mirrored...
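The two-phase idea in miniature: phase one is a dumb sequential raw copy of the device (what dd does), phase two parses the filesystem later, offline. This is a sketch, not the commenter's actual setup; the device path would be something like /dev/sdX in practice, and the demo below stands in an ordinary file for it:

```python
# Phase 1 only: mirror a block device to an image file with big
# sequential reads (the equivalent of dd bs=4M).
import tempfile
from pathlib import Path

BUF = 4 * 1024 * 1024  # 4 MiB buffer, like dd bs=4M

def raw_image(device, image):
    """Sequentially copy a block device (or file) to an image file."""
    with open(device, "rb") as src, open(image, "wb") as dst:
        while chunk := src.read(BUF):
            dst.write(chunk)

# Demo: pretend a 16 MiB file is the SD card's block device.
dev = Path(tempfile.mkdtemp()) / "fake_sd"
dev.write_bytes(b"\xab" * (16 * 1024 * 1024))
img = dev.with_suffix(".img")
raw_image(dev, img)
print(img.stat().st_size == dev.stat().st_size)  # True
```

The win is that the card is only occupied for the duration of a pure sequential read; all the slow filesystem walking happens later against the image on fast local storage.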
For instance, if you can spend 2 hours downloading each card: SPI to an ESP32 can do about 32GB an hour, so you could download 64GB in 2 hours via ESP32 boards with SPI uSD slots, pushed onto your server over WiFi 6.
If that's enough, that's a simple solution. If you need more than 32GB an hour per card, I've seen some adapters that hold 10? uSD cards and connect over SATA, so that might get you a better path onto the bus than typical USB?
If they just need the slots to plug everything in and leave overnight, you could probably just run some PCBs with heaps of card slots per MCU and let it cycle through them one after another.
Basically this scenario only makes sense if something is horribly wrong elsewhere, to the point that the only reasonable thing to do is fix things so a network is involved, not to figure out how to scale a sneakernet operation.
Or, 1000 action cameras (GoPro or equivalent). I only know of one outfit that currently does that, and they definitely would need to dump all 1000 at once to keep production rolling.
I’m pretty sure, although not certain, this is an employee of MrBeast or an affiliate.
I mean, if done right you'd use some kind of wireless-upload-while-they-charge scheme at the end of the day. But given the wide world, I could easily see someone somewhere going down this path.
LOL why are these people helping this guy for free?