But what it has already begun to do, and will continue to do, is change the way we interact with computers. The era of a personal voice assistant that is capable, adaptable, and intuitive is VERY close, and that's exciting. Siri and Alexa are going to look downright primitive compared to what we'll have in the next 2-5 years, and that is going to be VERY mainstream and VERY useful for huge swaths of the population.
Crypto still hasn't proven itself to be useful in any way, shape, or form that isn't immediately overshadowed by a different medium.
You’re treating it as a fact that LLMs are going to replace existing products, at some unknown future date.
“In 5 years, all code will be written by AI”
“In 5 years, LLMs will replace Siri and Alexa”
“In 5 years, AI will replace [sector of jobs]”
The thing that frustrates me about these statements is that you don’t know what AI technology is going to look like in 5 years, so stop treating it like a fact. It’s possible LLMs will be useful in all of these places, but we don’t know that yet.
That’s a fact.
I also know that voice-interfaces to date have been incredibly stiff and there is ample room for improvement. I know, for a fact, that having AI enable better voice interfaces will make computing better and more accessible. I have a hard time understanding how those are hype-driven comments and/or opinions.
We do know these things for a fact. Not being able to articulate exactly which breakthroughs will be most important doesn’t make it hype.
There doesn't seem to be a rush because it makes the implementation a lot more expensive, and those things are, I suspect, not profitable products (revenue sources) to their respective companies. They are a kind of enhancement to a layer of products and services; people take them for granted now and so you can't take them away.
A smarter Google Assistant would do nothing for Google's bottom line, and in fact it would cost more money to operate.
If it's not done right, it could ruin the experience. For instance, it cannot have worse latency on common queries than the old assistant.
All I did was hold its hand; it wrote every line of code. You are living in fantasy land if you think we will be writing lines of code in 10 years.
I was with you until that sentence. No, LLMs will not write all our code and the reason is very simple: coding is easier than reviewing code. Not to mention the additional complexities and weirdness that we've always dealt with without even thinking about it.
We can see in Photoshop what's coming for developers: context-sensitive AI autocompletion and gap filling. Copilot but more mature and integrated, perhaps with additional checks that prevent some bugs being inserted. And troubleshooting, the area where I think we can profit the most.
but will ChatGPT help you debug and fix a production issue that came about due to a Kafka misconfiguration? will it be able to find the deadlock in your code that is causing requests to be dropped? will it suggest a path forward when you need to replace an obscure library that hasn't been updated in 5 years? will it be able to make sense of seemingly contradictory business requirements?
Wake me up when ChatGPT is able to write and maintain a POS system, or an online store with attached fulfillment management. Anything that goes beyond a fancy 100-line script. Anything that people actually hire teams of senior devs, business analysts and software architects for.
They keep pontificating their nonsense around the LLM hype to the point where they don't even trust it. It's the same thing they did with ConvNets, and they still don't trust those either, since both hallucinate frequently.
I can guarantee you that people will not trust an AI to fly a plane end-to-end without any human pilots on board (autopilot does not count), and it is simply due to the fundamental black-box nature of these so-called 'AI' models, which makes them untrustworthy in high-risk situations.
I like to think of capable LLMs as gifted interns. I can expect decent results if I explain things well enough, but I need processes around them to make sure they are doing what they are told. In my industry that's enough to produce a noticeable productivity gain, and likely some reduction in employment, as it's a low-margin, cut-throat business relying on low-grade knowledge workers. I see the hype and honestly can't stand it, but it's measurably impacting my industry and the world around me.
Personally, I think LLMs are a step forward, but I suspect that GPT-4 is close to the limit of what’s possible with LLMs. I don’t think we’re going to see AGI from the same approach.
Stone ages. That’s not 5 years from now. That’s today.
I can't trust GPT, and neither can you. But if it really can do all your coding for you, what stops your employer from replacing you with a secretary from a temp agency?
It's so stupid for engineers to say that ChatGPT codes for them. They are shooting themselves in the face. They are devaluing the entire profession. Why? My reaction to all those breathless online demos was to point out the difference between what they were showing and what an engineer really does. Your reaction is to act like being a prompt jockey is the new way of engineering. How does that give you pride in yourself?
I do, and ChatGPT's code is rarely useful for me. I can prompt it well enough to do language-related stuff for me, but the code it can write for me is more like a highly custom boilerplate that I still need to refactor.
Even for greenfield private projects, it looks fine at first, but the bugs are more likely than not to be traced back to these snippets.
The year of the voice assistant is getting close to the year of Linux on the desktop.
What you’re promising has been promised time and time again, received endless hype cycles then collapsed once people realised the limits of the technology. Yes, this time the tech is much more capable than what came before but I’m inclined to believe we’ll yet again find a limit that means we’re using it for some things but our lives still aren’t drastically changed.
Case in point, I asked Siri to change my work address. She stated that I needed to use the Contacts app to do that. This is not very helpful. The issue here is not Siri’s inability to understand what I want, it is that the Contacts app does not support this method of data input. Siri is also probably not very good at extracting structured address information from me via natural language, but the new LLMs can do this easily.
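As a sketch of what that extraction step could look like (the prompt wording and JSON schema here are my own illustration, not anything Apple or OpenAI actually ships):

```python
import json

# Hypothetical schema the LLM is asked to fill in from free-form speech.
SCHEMA = {"street": "", "city": "", "postal_code": "", "country": ""}

def build_prompt(utterance: str) -> str:
    """Build a prompt asking the model for structured address JSON only."""
    return (
        "Extract the work address from the user's message and reply with "
        f"JSON matching this schema, nothing else: {json.dumps(SCHEMA)}\n"
        f"User: {utterance}"
    )

prompt = build_prompt("My new office is at 10 Main St, Springfield, 62704, USA")
# The model's reply would then be json.loads()-able into the schema,
# ready to hand to whatever structured API the Contacts app exposed.
```

The hard part, as the comment above notes, is that the Contacts app has to expose that write path in the first place.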
…which is something an LLM won’t help with.
“Just design an open ended API capable of doing absolutely anything someone might ask ChatGPT to do” is not the simple task you’re making it out to be!
There's a reason why people describe ChatGPT as a "research tool": you often need to do a bunch of iterations to get it to do the correct thing. And that's fine because it's non-destructive. But it's very far from a world where you can let it loose on a production, writable database and trust that it's going to do the correct thing.
Intuitive to use? Or has intuition?
Google and Amazon have tried to sell theirs for a long time, and none were actually selling much. Amazon admitted to selling theirs at a loss. Facebook tried their own and quickly cancelled it. Google's is in every Android device, and yet pretty much nobody uses it. Even Apple's Siri is more annoyance than help.
That something can be built doesn't mean it will sell or that people will actually want to use it. If you create a solution for an imaginary problem that your marketing thinks people want, instead of a solution that solves a real existing problem, you end up with a solution looking for a problem.
Also, answering questions and communicating in natural language is the easy part of such an assistant. For the thing to be useful it must be able to actually do something too, which is incredibly difficult beyond the (closed) ecosystem of its vendor. Third-party integrations are usually driven by who pays the manufacturer for the SDK and partner contract (seen as a marketing opportunity), not by what the users actually want it to integrate with. I am hoping for one of these with an open API that anyone could integrate whatever they want with, but I am not holding my breath here.
OpenAI is already on it. The latest gen of GPT-3 and -4 are finetuned to respond to "do this thing" commands with JSON structured to:
- provide the name of a given function call
- provide arguments to that function call
it's "early stage", which in this case probably means "good enough to be useful within a month or two", given the rate at which these things have been developing.
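For reference, the assistant messages those fine-tuned models return look roughly like this (I'm paraphrasing the shape from OpenAI's chat API docs; `get_weather` and its arguments are made-up examples, and note that `arguments` arrives as a JSON-encoded string you still have to parse and validate yourself):

```python
import json

# Shape of an assistant message from a function-calling model.
# "arguments" is a JSON-encoded *string*, not a nested object.
message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_weather",
        "arguments": "{\"location\": \"Berlin\", \"unit\": \"celsius\"}",
    },
}

call = message["function_call"]
args = json.loads(call["arguments"])  # can raise if the model emits bad JSON
print(call["name"], args["location"])  # get_weather Berlin
```

Your code then runs the named function with those arguments and (optionally) feeds the result back to the model for a final natural-language answer.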
Anecdotally, I've been playing with giving the models instructions like:
"When asked to perform a task that you need a tool to accomplish, you will call the tool according to its documentation by this format:
TOOL_NAME(*args)
Below you will find the documentation for your tools."
...and I've gotten it working pretty damn well (not even with the JSON-finetuned models, mind you). All you really need is python-style docstrings and a minimal parser and you're off to the races. I recommend anyone interested play with it a bit.
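A minimal sketch of that setup, under the hypothetical `TOOL_NAME(*args)` convention from the prompt above: a tiny registry whose docstrings become the "documentation" pasted into the prompt, plus a regex parser that dispatches the model's reply.

```python
import inspect
import re

# Hypothetical tool registry; each docstring is the tool's documentation.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a, b):
    """add(a, b): return the sum of two numbers."""
    return float(a) + float(b)

def tool_docs() -> str:
    # Collect every registered tool's docstring for the system prompt.
    return "\n".join(inspect.getdoc(fn) for fn in TOOLS.values())

# Matches e.g. add(2, 3) anywhere in the model's reply.
CALL_RE = re.compile(r"(\w+)\(([^)]*)\)")

def dispatch(reply: str):
    """Parse the model's reply and run the requested tool, if any."""
    m = CALL_RE.search(reply)
    if not m:
        return None  # model answered in plain text, no tool call
    name, raw_args = m.group(1).lower(), m.group(2)
    args = [a.strip() for a in raw_args.split(",") if a.strip()]
    return TOOLS[name](*args)
```

So `dispatch("add(2, 3)")` runs the registered `add` tool, while a plain-text reply falls through untouched. A real version would also validate the tool name and argument types before executing anything.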
“Intuitive to use” roughly means that it is easy for a human to interact with.
“Intuition” is the ability to understand something immediately, without the need for conscious reasoning.
I don't really see either of those things as a real possibility. Within my lifetime, anyway.
Seems like it has proven very useful for Stripe [0], Moneygram [1], TicketMaster [2], etc.
Unlike AI, which continues to consume enormous resources while burning the world down, with no viable, efficient methods of training, inference, or fine-tuning emerging from the past decade of chatbot hype and gimmickry [3], crypto does not need to emit tons of CO2 to operate, thanks to alternative, greener consensus algorithms available in production today. [4]
Being 'useful' is not an excuse to destroy the planet for untrustworthy AI models that get confused by a single pixel or hallucinate in the middle of the road.
[0] https://stripe.com/gb/use-cases/crypto
[1] https://stellar.org/moneygram
[2] https://business.ticketmaster.com/business-solutions/nft-tok...
[3] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...
[4] https://consensys.net/blog/press-release/ethereum-blockchain...