- Commodity Arm SoC (or sometimes N100 or N150 x86)
- 8/16/32 GB of LPDDR5x RAM
- 'NPU' (usually unspecified) with an ambiguous 'TOPS' number (like 20, 40, or 80)
Usually specifics aren't provided, and TOPS is never defined in a technically useful way. The few times it is, it's from more established companies (e.g. Asus or Raspberry Pi integrating a well-known NPU chip into one of their products).

It's worse at this point than the peak of the crypto boom, when I was getting emails touting the next chain-of-proof software, or ledger-this/ledger-that. Now that there are a few actual use cases for this hardware, it takes more nuance to separate the wheat from the chaff.
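To make the ambiguity concrete: the same NPU silicon can be quoted at several different 'TOPS' figures depending on which precision and sparsity assumptions the marketing team picks. A back-of-envelope sketch, with entirely made-up hardware numbers:

```python
def peak_tops(mac_units: int, clock_ghz: float) -> float:
    """Theoretical peak: each MAC counts as 2 ops (multiply + accumulate) per cycle."""
    return mac_units * 2 * clock_ghz * 1e9 / 1e12

base = peak_tops(16384, 1.2)   # hypothetical 16K-MAC INT8 array at 1.2 GHz
print(f"INT8 dense:        {base:5.1f} TOPS")      # ~39 TOPS
print(f"INT4 (2x packing): {base * 2:5.1f} TOPS")  # same silicon, '80 TOPS' on the box
print(f"INT8 2:4 sparse:   {base * 2:5.1f} TOPS")  # sparsity doubles it again on paper
print(f"FP16 dense:        {base / 2:5.1f} TOPS")  # often the number that matters for LLMs
```

So '40 TOPS' and '80 TOPS' can describe the exact same chip, and neither tells you the FP16 throughput or the memory bandwidth that actually bound LLM inference.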
As for me, I typically spend weeks with any hardware I _do_ review, running as many models and test runs as I can (and documenting everything on GitHub, in depth, with scripts so other people can verify). Most reviewers (like those at the publications named in this post) either don't have the time or, sadly, the understanding to test these devices in a meaningful way.
Therefore, random blog posts (which are getting harder and harder to find, amidst the AI-laden first 2-4 pages of DuckDuckGo and Google results) are the best source of information. Or sometimes a post on Mastodon, which is never easy to find since search isn't a thing there.
Edit: Ah, they did reach out around CES time. Funny seeing their pitch deck include a note on Dr. Miles Mi, with a row of logos on that page (Apple, MIT, Berkeley, DJI, VIVO, Tuya, and a few others), as if those organizations were using this project or something?
Which is fine, but please disclose it. Otherwise, as in this case, I'm going to assume the author is a moron who can't write for shit and thinks their readers are morons who can't read for shit.
Yes, the site is new, but other posted articles are 100% consistent with the author wanting a guaranteed level of local inference with large models.
What I read was written by a skeptic who took claims and systematically addressed a number of issues. The debunking was concise and used simple sentences.
The boxes in the pictures were, to my eye, generated manually using the macOS Preview annotation feature. They are not well aligned. I've used this technique many times to generate overlays, and if this were me, I'd get called out. I like nicely spaced and proportioned boxes! NB: iFixit teardowns are misaligned as well, and it bugs me.
People have a distasteful habit of assigning others to boxes, particularly if that box currently has a negative connotation. Boxing is a primary tool of the misguided, bullies, sycophants, censors, and those with an unspoken agenda. Humor: which box applies?
There you go, two sentences without burying the lede.
Is it maybe competitive on value anyway, though? Even looking just at the accelerators, 48 GB + 160 TOPS seems comparable to some Strix Halo mini PCs with 64 GB: lower memory bandwidth, but a few hundred dollars cheaper (and for LLM decode, bandwidth is the number that matters; see the rough ceiling sketch below). If they sold just the accelerator card for $800 or something, that would be potentially very interesting.
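For single-user LLM decode these boxes are almost always memory-bandwidth-bound, not TOPS-bound: every generated token has to stream the full set of active weights through memory once. A rough roofline-style ceiling, using illustrative bandwidth and quantization numbers rather than measured specs for either device:

```python
def decode_ceiling_tok_s(active_params_b: float, bits_per_weight: float,
                         mem_bw_gb_s: float) -> float:
    """Upper bound on decode speed: each token streams all active weights once.
    Ignores KV-cache traffic and compute, so real numbers land well below this."""
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return mem_bw_gb_s * 1e9 / bytes_per_token

# Illustrative: a 3B-active-parameter MoE quantized to 4 bits per weight.
for label, bw in [("~60 GB/s (typical LPDDR5 SBC)", 60),
                  ("~256 GB/s (Strix Halo class)", 256)]:
    print(f"{label}: <= {decode_ceiling_tok_s(3, 4, bw):.0f} tok/s ceiling")
```

Real-world throughput typically lands well under these ceilings once KV-cache traffic and overhead are counted, but the ratio between the two bandwidth classes carries over, which is why "a few hundred dollars cheaper" may not be the bargain it looks like.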
https://wp.pureprogrammer.org/2025/12/20/comparison-of-orang...
But yeah, this should have been priced at something like 2x a maxed-out Raspberry Pi 5.
https://www.kickstarter.com/projects/tiinyai/tiiny-ai-pocket...
Including questions of LLM origin. Seems like the OP might have submitted that one (47431685), although there's another copy now (beyond this SCP entry from 3 days ago).
Given how subsidized the subscription plans are, and the gap between OSS benchmaxxing and real-life performance, you're paying much more for far worse models.
>For perspective: a consumer NVIDIA RTX 4060 Ti (~$400) can run comparable 3B active-parameter MoE workloads at 70–90 tok/s with 100K+ context, depending on setup. The Pocket Lab lands around 6–12 tok/s at 8K–32K context.
>Same class of workload. Roughly 5–10× slower, at 3× the price, with tighter constraints.
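Taking the quoted figures at face value, the ratios are easy to sanity-check (all numbers below come from the quote above, nothing independently measured):

```python
rtx_tok_s = (70, 90)          # RTX 4060 Ti range, per the quote
pocket_tok_s = (6, 12)        # Pocket Lab range, per the quote
rtx_price = 400               # ~$400, per the quote
pocket_price = 3 * rtx_price  # "3x the price", per the quote

lo = rtx_tok_s[0] / pocket_tok_s[1]   # best case for the Pocket Lab: ~5.8x slower
hi = rtx_tok_s[1] / pocket_tok_s[0]   # worst case: 15x slower
print(f"slowdown: {lo:.1f}x to {hi:.1f}x, at {pocket_price / rtx_price:.0f}x the price")
```

Midpoint to midpoint that's roughly 9x slower, so the quoted "roughly 5–10x" is in the right ballpark, if anything charitable at the top end.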
Flagged.
It will be interesting to see whether there's a public outcry once these boxes start arriving for those who funded the Kickstarter.
Every time I complain about this kind of useless AI slop I get downvoted to hell and get dozens of comments saying "it doesn't look AI at all", so I don't even bother anymore. It's incredibly sad, I expected much more from this community... But it looks like it'll soon be dead like the rest of the internet.