I don't think a lot of people back in 2007 could have predicted that the biggest thing to come from mobile would be an app that let teens remix music videos and share with their friends.
This is why I think it is a little pointless to try and create mental models for what products and features to build to capitalize on AI (though it can be fun). It's so early that we're not capable of understanding what's possible yet. If anything, we're probably at the viral "fart app" stage that mobile was in for its first few years.
I think that the biggest thing to come from mobile was always-available location, exemplified by Google Maps, which already existed before mobile happened. Many of the most successful apps relied on this.
TikTok is different, in that it could have existed on desktop (but would have looked very different) whereas Uber (for example) definitely couldn't.
I wouldn't underestimate the amount of friction removed by having an app on your phone which a) is always with you, b) can record top-quality video, and c) has a data plan good enough to upload there and then.
(Perhaps SMS plus some feature-phone-level of GPS integration.)
>What if I want to keep my history on but disable model training?
We are working on a new offering called ChatGPT Business that will opt end-users out of model training by default. In the meantime, you can opt out from our use of your data to improve our services by filling out this form. Once you submit the form, new conversations will not be used to train our models.
https://docs.google.com/forms/d/e/1FAIpQLScrnC-_A7JFs4LbIuze...
(There might be other concerns, e.g. around privacy, about giving your data away. But worrying about value isn't really one of them as an individual.)
You mean how fast that took, right?
So, yeah, there was a maybe five-year (and certainly less than ten) period when smartphones went from a fairly niche thing to ubiquity, which is very fast compared to technology adoption generally.
Then overlay that with the limiting factors for AI/LLMs.
I had a third-party logistics startup in 2007 that I would have loved to turn into Uber for shipping, but it would have been at least a few years before the cell networks and smartphone ownership reached a point where it was possible.
I don't know if there are limiting factors that will delay LLMs potential to disrupt the status quo.
That's funny for me to read almost 15 years after rjdj [0]. There is a difference between an idea that is possible and an idea whose time has come.
Ignore all else and get the company name in front of the cheque books as quickly as humanly possible. Product-market fit, MVP, bootstrapping, and stealth are naughty words that have no place in Hype Cycles.
No further strategy is required. However, for the advanced entrepreneur: be aware that all cycles have a bust phase, and this time it is not different.
(unless you wrote blog posts and "content" - in which case copy the article you wrote about Product Strategy in the Age of NFTs a couple of years ago and swap the crypto for ai - you will get lots of clicks and nobody will notice)
Of course, this could never have happened, as the temptation of "AI" for marketing products is irresistible and the confusion between "real AI" and "new generative LLM tools" actually benefits companies.
I don't know what will happen to the term "AI" after this hype cycle dies down.
I guess we'll enjoy adding "AI means artificial intelligence, not generative text" to that statement for the rest of our lives.
If you think it can, did you not put much stock in the Turing test before they passed it?
If that were true, there wouldn't be that many funded failed startups. In fact, there might not be startups at all. The investors could take the funds and just find someone to do their bidding instead of randomly "investing".
Interest rates were high in the "dot-com" era and that didn't stop them. If you raise $100M now, you can cover your payroll and your AWS bill from the interest alone.
For example, they talk about AI-generated copies of your voice becoming the way people communicate with each other. But who wants to listen to a computer copy of someone else's voice? No one. Maybe it will replace the pizza shop guy answering the phone, but it certainly isn't going to replace real conversations between friends and family members.
I saw another app that uses a small number of family photos to generate the surrounding scene where the photos were taken. Again, it's just a gimmick – family photos have value because they are memories of real events, not because of the intrinsic nature of the photographic paper.
If I were a betting man, I would bet on a major backlash to this sort of "automate everything" approach and a serious counter-culture to arise in the next decade or two.
They of course ignore the fact that most people don't want to be talking to a robot and would take the human any day of the week. And most people create things as an outlet for their creativity or whatever else, not (solely) as a way to make oodles of money.
The company I work for provides tools for Support teams, and there's been talk from the higher-ups about "automating away 90% of conversations", which basically translates to us auto-closing 90% of all incoming messages for our customers based on some "AI" decisions. The only people who buy into it are the CEO/CTO and their direct underlings; everyone else in the company realizes how fucking stupid and shortsighted that is, but they don't care. It's the big hype thing, all the competitors do it regardless of how idiotic it is, and our customers want to get rid of as much human labor as possible.
I think you underestimate the cultural desire and pressure for a perfect presentation of one's self. It started out with mass marketing, where every advertisement and authorized celebrity photo published in the last 50 years has been in some way retouched, cleaned up, photoshopped. This cancer spread to social media and metastasized with filters. Now Zoom by default smooths my patchy face. The next logical step is basically VTube but with your own face instead of an avatar. Conventionally attractive people have huge advantages, after all. If it starts to be normal, then those who don't use it will be disadvantaged. Maybe family calls are different, but in professional settings where you're trying to influence others, it's an advantage.
I'd imagine a lot of people would be up for doing this if it improved call quality.
If you could reconstruct a person's voice from text in real time, locally on your phone, you would only need to transmit compressed text during the call, reducing the bitrate a tonne. And those volume issues and hard-to-remove voice artefacts would cease to be a problem too (assuming the sender's side can still transcribe correctly).
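As a rough back-of-the-envelope check (all numbers below are illustrative assumptions, not measurements), a live transcript needs orders of magnitude less bandwidth than even a low-bitrate speech codec:

```python
# Rough, illustrative comparison of transmitting compressed audio vs. text
# for a voice call. All numbers are ballpark assumptions, not measurements.

OPUS_VOICE_BITRATE_BPS = 16_000  # a common Opus setting for speech
WORDS_PER_MINUTE = 150           # typical conversational speaking rate
BYTES_PER_WORD = 6               # ~5 chars plus a space, ASCII

def text_bitrate_bps(wpm: int = WORDS_PER_MINUTE,
                     bytes_per_word: int = BYTES_PER_WORD) -> float:
    """Bits per second needed to stream a live transcript."""
    words_per_second = wpm / 60
    return words_per_second * bytes_per_word * 8

if __name__ == "__main__":
    text_bps = text_bitrate_bps()
    print(f"text stream: {text_bps:.0f} bps")        # 120 bps
    print(f"opus voice:  {OPUS_VOICE_BITRATE_BPS} bps")
    print(f"reduction:   ~{OPUS_VOICE_BITRATE_BPS / text_bps:.0f}x")
```

Under those assumptions the text stream is roughly two orders of magnitude smaller, before even counting the robustness win of sending symbols instead of waveforms.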
Customers. Having customers is an advantage over the next guy who doesn’t, because now you can start customizing your product for unique needs rather than having a generic crud app.
Custom models. A custom model means some kid can’t just replicate your app easily.
Unique data. Data which is infeasible for another company to acquire or replicate.
Special people. People who will give your startup an edge in creating all of the above.
As someone leading a product team which "only calls apis" in this context: this is a very premature take. Hear me out :-)
Robust LLM-powered software requires
* very thoughtful design of prompt templates
* understanding of top_p and temperature in the context of said templates and their parameter space
* very thoughtful design of representative test cases for a given combination of prompt template and api params. without these, you're not even able to reason about the value range of the function you're developing
* execution and evaluation of those tests
* maintenance of all above
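A minimal sketch of the kind of harness those bullets imply (the `call_llm` stub, the test-case shape, and all names here are invented for illustration; a real version would call an actual completion API and sweep temperature/top_p):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    variables: dict                # values substituted into the prompt template
    check: Callable[[str], bool]   # predicate over the model output

def call_llm(prompt: str, temperature: float, top_p: float) -> str:
    """Hypothetical stand-in for a real completion API call.
    Deterministic echo so the harness can be exercised offline."""
    return f"SUMMARY: {prompt[:40]}"

def run_suite(template: str, cases: list[TestCase],
              temperature: float = 0.0, top_p: float = 1.0) -> float:
    """Run every test case against one (template, params) combination
    and return the pass rate."""
    passed = 0
    for case in cases:
        prompt = template.format(**case.variables)
        output = call_llm(prompt, temperature=temperature, top_p=top_p)
        passed += case.check(output)
    return passed / len(cases)

cases = [
    TestCase({"text": "quarterly revenue rose 12%"},
             check=lambda out: out.startswith("SUMMARY:")),
    TestCase({"text": "the build failed on CI"},
             check=lambda out: "SUMMARY" in out),
]
rate = run_suite("Summarize: {text}", cases, temperature=0.0)
print(f"pass rate: {rate:.0%}")  # 100%
```

The point is that the test suite, not any single output, is what lets you reason about the value range of the function you're shipping; change the template or the sampling params and you re-run the whole suite.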
...and that's just talking about ensuring the desired output types in one closed context. I won't go into the creativity required to solve more complex problems (content injections, for one). Let me just say this: I won't lose sleep because anyone could just replicate our applications. The opposite is the case: I invite anyone to try and catch up. Good luck with that.
What you wrote might apply to prototypes of zero-shot applications, but not to production-ready software, let alone production-ready software that solves problems involving more than one isolated LLM call.
There are very few web applications that have any real proprietary implementations that are impossible to replicate. It's a combination of factors that builds the potential moat for the business.
A) You build a MVP/PoC on top of an existing model to quickly find the value prop and product/market fit
- That won't build a moat and won't be efficient to tackle the problem at hand (as laid out by the No Free Lunch Theorem), but will help you zero in on the opportunity
B) Once you have this sorted out you build your custom model with your unique data to build your moat and perfect your product
The right time to go for funding, in my opinion, would be closer to moment B, as building that custom model is where you'll need serious capital (especially to hire the talent who will help you do it).
This is something their CTO reaffirmed last week at another talk.
This dynamic calls into question the viability of ML research as a core business strategy. Take Midjourney, for example. They've made significant strides and achieved dominance with their advanced text-to-image generation technology. But if a product like DALL-E 3, or its successors, can render their entire offering redundant within a few short years, then it's a tricky path for a company to take.
To me, this suggests that the actual "new strategy in the age of AI" is that tech companies need to transition from relying on their tech edge as their competitive advantage to relying on more stable moats, for example the network effects rooted in two-sided marketplaces. It also hints that tech giants like Google, who above all relied on their tech advantage, could face existential challenges in the coming decade: a sort of win-or-die situation. Companies like Amazon, meanwhile, might be on more stable ground for now.
Data is the new oil in the age of AI. The companies that do well will have products that siphon context-enriched user behavior, build a strong brand with user loyalty, and effectively capitalize on the collected data to automate some expensive task. These data collection apps will be designed to break down and gamify tasks in such a way as to maximize the training value of the resulting data stream.
For example, imagine an IDE with an integrated stack overflow type service, where people could do collaborative coding or request help and get answers inside the application. That would give edit-by-edit updates, console output, problems with solutions and user solution preference. The company that owned that data would have a huge leg up on the competition in terms of creating AI software generation tools.
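One way such a product might structure its data stream (all names and fields here are invented for illustration): each edit, console run, and accepted answer becomes a record pairing a problem state with the action a human took, which is exactly the supervised data a code-generation model wants.

```python
from dataclasses import dataclass, field, asdict
import time

@dataclass
class EditEvent:
    """One illustrative training record: a before/after snapshot of code
    plus the signal of whether the edit resolved the user's problem."""
    file: str
    before: str
    after: str
    console_output: str = ""
    accepted_solution: bool = False
    timestamp: float = field(default_factory=time.time)

# A session becomes an ordered stream of (state, action, outcome) records.
events = [
    EditEvent("main.py", "pritn(x)", "print(x)",
              console_output="NameError: name 'pritn' is not defined",
              accepted_solution=True),
]
print(asdict(events[0])["after"])  # print(x)
```

The `accepted_solution` flag is the expensive part: it encodes user preference, the label a competitor without the product cannot collect.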
I'm not so sure. Don't they have a huge data moat that's hard to compete with in a data driven age?
I'm thinking the data that optimises advertising rather than the data that feeds google translate etc.
You learn by doing.
There's so much value in actually making something. People forget how much is in the details, or how much something like good design can differentiate.
You can sit on the sidelines forever thinking that your idea isn't 'different' enough, but the ones actually making stuff, listening to users, gaining the end-to-end experience, will actually have a larger 'luck surface area'.
Even if your idea gets taken, or someone comes along and does it better, or cheaper – there's value in _trying_.
Specifically regarding AI: the models existed for quite a while, for free, for anyone to use in OpenAI's Playground. But suddenly they hooked it up to a chat UI and it blew up completely. You never know what the key thing is going to be. But if you sit around forever, you're guaranteeing failure.
And can AI truly understand human emotions to handle sensitive customer issues or create artwork that resonates with people on a deeper level? There might be more to consider than just the cool tech.
Times are changing and I can't handle another hype cycle.
My CTO is already running around the halls talking to people about "have you done anything with ChatGPT in our projects...you should".
The reason there is so much debate on Gen AI is due to the emergence of unpredicted abilities.
Yes, GenAI is insanely impressive - this is not a luddite argument. I have personally spent months on it, and continue to do so. IT'S AWESOME.
However, it really isn't going to do a tenth of the things people expect it to. Those emergent properties only seem like actual reasoning, planning, or analysis.
Get to production data, though, and the emperor has no clothes. Those "emergent" skills end up showing you how often correlation in text is good enough - provided the person reviewing the text is already an expert.
Every single AI hype article includes some version of this sentence! "While AI can be helpful for the repetitive boring work that other people do, the impact on MY work is more nuanced."
It's basically the face eating leopard meme. "AI wouldn't automate MY job" says president of AI job automation company.
Other avenues are more human accelerators than replacements. I have been around long enough to know that if a tool presents a risk to someone's job, the tool often gets thrown down the stairs "accidentally". GE bought hard into Google Glass back in the day and tried having it walk through procedures for complex repair processes. A great idea, if literally anyone in the field had asked for it.
I'm with many in thinking the hype train hit hard for "AI" and blockchain, but LLMs do have real value and real application for some excellent use cases. I also find them an excellent sounding board for my own ideas, though the models tend not to want to disappoint you.
I'm not saying the hype isn't real, but I'm definitely skeptical.
edit: for context, my firm screamed to high heaven about how the whole metaverse thing was a game changer too. I called that one BS right out of the gate.
I am a lot more pessimistic about the startup scene in this area.
Look at how ChatGPT can teach languages [1]; good luck building an AI-powered language learning app…
It gets worse for startups because Google and OpenAI have a ton more context about me. For example in the language learning conversation Google can refer to my spoken samples from other places to improve the experience. And yet, no PM at Google needs to think of this, they only need to hook up the data and throw more compute at their models.
[1] https://twitter.com/dmvaldman/status/1707881743892746381
Speak for yourself. I have zero interest in having a conversation with the computer, whether typing or (especially) speaking out loud.
Product and UI and UX work will continue to be valuable; if anything, good quality work will stand out even more amongst the oncoming tidal wave of low quality AI/LLM stuff.
Good lord no. Never. Not in a trillion years will I be okay with interacting with these data hoovering blackboxes rather than just fucking clicking on the button with my mouse.
I thought the same and agreed with him.
Everything we do now is just working towards that super assistant.
I think in the future, your "phone" is just an interface between you and your assistant. Not much else. As an Apple shareholder, this is my biggest worry. iOS is suddenly a lot less necessary.
every web-only consumer product basically died
What is true is that B2B products didn't need them, or needed them as "companion apps", not full replacements.
iPhone games > Yahoo Games