> The primary argument against all these AI gadgets so far has been that the smartphone exists. Why, you might ask, do I need special hardware to access all this stuff?
The proposed answer is that smartphones are too hard to use (???)
> To do almost anything on your phone, you have to take the device out of your pocket, look at it, unlock it, open an app, wait for the app to load, tap between one and 40,000 times, switch to another app, and repeat over and over again.
And an allusion to app stores being bad:
> And they’re not going to get better, not as long as the app store business model stays the way it is.
The part that's left unsaid is that none of these AI devices promise some new, open model of computing. Instead, it's a play for the same lock-in as app stores, or more. The Humane pin, for example, requires an expensive ($24/mo) subscription just to use the hardware. The lock-in has simply moved to a new playing field where the incumbents don't yet have a dominant position.
Those complexities and problems are real, but the obvious value of a context-aware AI is reducing friction by hiding those complexities from the user, so they can state a goal and have the right tools to achieve it selected and/or used for them.
Forcing someone to use a half-dozen devices instead of a half-dozen apps does the opposite: it increases complexity and friction for the user by adding extra steps between "this is my goal" and "goal accomplished".
I find the less I use my phone and technology, the higher quality life I have. Life seems quite nice with less data and no social media :)
Companies need to learn there's a lot of value in restraint and not acting like they're the most important thing in the world.
It isn't clear to me that, given the nature of LLMs, we can actually solve this problem. An LLM isn't thinking critically and never will be without becoming a different kind of technology. (Someone correct me if I am wrong, but it seems like this is just a fundamental limitation of this type of tech.)
There have been very few use cases of what we are now calling "AI" that seem to provide any real benefit. The only one I find myself using on a daily basis is searching through and summarizing my personal notes, something the nature of an LLM is very well suited for.
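That note-summarization use case is easy to sketch. Here's a minimal, hypothetical example of the usual map-reduce pattern: split notes into chunks small enough for a model's context window, summarize each chunk, then summarize the summaries. The `summarize` callable is a stand-in for whatever local or hosted model you actually use; nothing here is a specific product's API.

```python
def chunk_notes(text: str, max_chars: int = 2000) -> list[str]:
    """Split note text on paragraph boundaries, packing paragraphs
    into chunks of at most max_chars characters. (A single oversized
    paragraph becomes its own chunk.)"""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def summarize_notes(text: str, summarize) -> str:
    """Map-reduce: summarize each chunk, then summarize the
    combined partial summaries into one final summary."""
    partial = [summarize(chunk) for chunk in chunk_notes(text)]
    return summarize("\n".join(partial))
```

The two-pass structure is what keeps each individual model request within context limits, which is why even a plain LLM handles "what do my notes say about X" reasonably well.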
Prolog, ontologies, computer vision, deep learning, classifiers, etc. have all been called AI despite being very different things with very different inherent limitations. At this point "AI" is just a label thrown at the newest cool tech.
Right, and I am not being critical of the technology to diminish this achievement. It is a great achievement.
But it feels like we have moved so far past it being "just" an "LLM" to already considering it a general-purpose AI, when it simply isn't.
The problem is it fakes it well enough, in enough situations, that a lot of people seem to come to the conclusion that it is one.
100%. Bookmark this and come back in 5 years.
There are almost 2BN iPhones and Androids in existence and there is still room to innovate around mobile with AI.
Why buy another gadget just for a single use case with AI when your phone already has the capability and power to run local models on device?
Or not, because turns out your phone can do it all.
The white dot thingy that was posted here a week or two ago is also something nobody needs.
I don't know how smartphones might transform, but surely not into the gadgets shown here.